00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 1009 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3676 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.098 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.098 The recommended git tool is: git 00:00:00.099 using credential 00000000-0000-0000-0000-000000000002 00:00:00.101 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.123 Fetching changes from the remote Git repository 00:00:00.126 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.150 Using shallow fetch with depth 1 00:00:00.150 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.150 > git --version # timeout=10 00:00:00.177 > git --version # 'git version 2.39.2' 00:00:00.177 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.203 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.203 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.809 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.820 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.832 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.832 > git config core.sparsecheckout # timeout=10 00:00:05.842 > git read-tree -mu HEAD # timeout=10 00:00:05.856 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.877 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.877 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.995 [Pipeline] Start of Pipeline 00:00:06.012 [Pipeline] library 00:00:06.014 Loading library shm_lib@master 00:00:07.447 Library shm_lib@master is cached. Copying from home. 00:00:07.496 [Pipeline] node 00:00:07.655 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.673 [Pipeline] { 00:00:07.708 [Pipeline] catchError 00:00:07.711 [Pipeline] { 00:00:07.735 [Pipeline] wrap 00:00:07.749 [Pipeline] { 00:00:07.781 [Pipeline] stage 00:00:07.785 [Pipeline] { (Prologue) 00:00:07.804 [Pipeline] echo 00:00:07.807 Node: VM-host-SM9 00:00:07.816 [Pipeline] cleanWs 00:00:07.824 [WS-CLEANUP] Deleting project workspace... 00:00:07.824 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.830 [WS-CLEANUP] done 00:00:08.124 [Pipeline] setCustomBuildProperty 00:00:08.184 [Pipeline] httpRequest 00:00:09.028 [Pipeline] echo 00:00:09.029 Sorcerer 10.211.164.20 is alive 00:00:09.038 [Pipeline] retry 00:00:09.040 [Pipeline] { 00:00:09.054 [Pipeline] httpRequest 00:00:09.058 HttpMethod: GET 00:00:09.059 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.059 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.061 Response Code: HTTP/1.1 200 OK 00:00:09.061 Success: Status code 200 is in the accepted range: 200,404 00:00:09.062 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.665 [Pipeline] } 00:00:09.683 [Pipeline] // retry 00:00:09.690 [Pipeline] sh 00:00:09.973 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.988 [Pipeline] httpRequest 00:00:10.337 [Pipeline] echo 00:00:10.339 Sorcerer 10.211.164.20 is alive 00:00:10.349 [Pipeline] retry 00:00:10.351 [Pipeline] { 00:00:10.369 [Pipeline] httpRequest 00:00:10.374 HttpMethod: GET 00:00:10.375 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:10.376 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:10.391 Response Code: HTTP/1.1 200 OK 00:00:10.392 Success: Status code 200 is in the accepted range: 200,404 00:00:10.393 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:47.290 [Pipeline] } 00:00:47.310 [Pipeline] // retry 00:00:47.320 [Pipeline] sh 00:00:47.600 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:50.146 [Pipeline] sh 00:00:50.428 + git -C spdk log --oneline -n5 00:00:50.428 c13c99a5e test: Various fixes for Fedora40 00:00:50.428 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:50.428 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:50.428 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:50.428 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:50.448 [Pipeline] withCredentials 00:00:50.459 > git --version # timeout=10 00:00:50.472 > git --version # 'git version 2.39.2' 00:00:50.488 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:50.490 [Pipeline] { 00:00:50.500 [Pipeline] retry 00:00:50.502 [Pipeline] { 00:00:50.518 [Pipeline] sh 00:00:50.799 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:51.071 [Pipeline] } 00:00:51.090 [Pipeline] // retry 00:00:51.097 [Pipeline] } 00:00:51.115 [Pipeline] // withCredentials 00:00:51.127 [Pipeline] httpRequest 00:00:51.517 [Pipeline] echo 00:00:51.522 Sorcerer 10.211.164.20 is alive 00:00:51.537 [Pipeline] retry 00:00:51.539 [Pipeline] { 00:00:51.550 [Pipeline] httpRequest 00:00:51.553 HttpMethod: GET 00:00:51.554 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:51.554 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:51.565 Response Code: HTTP/1.1 200 OK 00:00:51.566 Success: Status code 200 is in the accepted range: 200,404 00:00:51.566 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:13.561 [Pipeline] } 00:01:13.578 
[Pipeline] // retry 00:01:13.586 [Pipeline] sh 00:01:13.867 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:15.259 [Pipeline] sh 00:01:15.544 + git -C dpdk log --oneline -n5 00:01:15.544 caf0f5d395 version: 22.11.4 00:01:15.544 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:15.544 dc9c799c7d vhost: fix missing spinlock unlock 00:01:15.544 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:15.544 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:15.565 [Pipeline] writeFile 00:01:15.582 [Pipeline] sh 00:01:15.867 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:15.881 [Pipeline] sh 00:01:16.162 + cat autorun-spdk.conf 00:01:16.162 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.162 SPDK_TEST_NVMF=1 00:01:16.162 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.162 SPDK_TEST_URING=1 00:01:16.162 SPDK_TEST_USDT=1 00:01:16.162 SPDK_RUN_UBSAN=1 00:01:16.162 NET_TYPE=virt 00:01:16.162 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:16.162 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:16.162 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.170 RUN_NIGHTLY=1 00:01:16.172 [Pipeline] } 00:01:16.188 [Pipeline] // stage 00:01:16.206 [Pipeline] stage 00:01:16.209 [Pipeline] { (Run VM) 00:01:16.224 [Pipeline] sh 00:01:16.504 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:16.504 + echo 'Start stage prepare_nvme.sh' 00:01:16.504 Start stage prepare_nvme.sh 00:01:16.504 + [[ -n 1 ]] 00:01:16.504 + disk_prefix=ex1 00:01:16.504 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:16.505 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:16.505 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:16.505 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.505 ++ SPDK_TEST_NVMF=1 00:01:16.505 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.505 ++ SPDK_TEST_URING=1 00:01:16.505 ++ SPDK_TEST_USDT=1 00:01:16.505 ++ SPDK_RUN_UBSAN=1 00:01:16.505 ++ NET_TYPE=virt 00:01:16.505 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:16.505 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:16.505 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.505 ++ RUN_NIGHTLY=1 00:01:16.505 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:16.505 + nvme_files=() 00:01:16.505 + declare -A nvme_files 00:01:16.505 + backend_dir=/var/lib/libvirt/images/backends 00:01:16.505 + nvme_files['nvme.img']=5G 00:01:16.505 + nvme_files['nvme-cmb.img']=5G 00:01:16.505 + nvme_files['nvme-multi0.img']=4G 00:01:16.505 + nvme_files['nvme-multi1.img']=4G 00:01:16.505 + nvme_files['nvme-multi2.img']=4G 00:01:16.505 + nvme_files['nvme-openstack.img']=8G 00:01:16.505 + nvme_files['nvme-zns.img']=5G 00:01:16.505 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:16.505 + (( SPDK_TEST_FTL == 1 )) 00:01:16.505 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:16.505 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:16.505 + for nvme in "${!nvme_files[@]}" 00:01:16.505 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:16.505 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.505 + for nvme in "${!nvme_files[@]}" 00:01:16.505 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:16.505 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.505 + for nvme in "${!nvme_files[@]}" 00:01:16.505 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:16.505 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:16.505 + for nvme in "${!nvme_files[@]}" 00:01:16.505 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:16.505 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.505 + for nvme in "${!nvme_files[@]}" 00:01:16.505 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:16.505 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.505 + for nvme in "${!nvme_files[@]}" 00:01:16.505 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:16.763 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.763 + for nvme in "${!nvme_files[@]}" 00:01:16.763 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:16.763 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.763 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:16.763 + echo 'End stage prepare_nvme.sh' 00:01:16.763 End stage prepare_nvme.sh 00:01:16.776 [Pipeline] sh 00:01:17.060 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:17.061 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:01:17.320 00:01:17.320 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:17.320 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:17.320 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:17.320 HELP=0 00:01:17.320 DRY_RUN=0 00:01:17.320 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:01:17.320 NVME_DISKS_TYPE=nvme,nvme, 00:01:17.320 NVME_AUTO_CREATE=0 00:01:17.320 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:01:17.320 NVME_CMB=,, 00:01:17.320 NVME_PMR=,, 00:01:17.320 NVME_ZNS=,, 00:01:17.320 NVME_MS=,, 00:01:17.320 NVME_FDP=,, 
00:01:17.320 SPDK_VAGRANT_DISTRO=fedora39 00:01:17.320 SPDK_VAGRANT_VMCPU=10 00:01:17.320 SPDK_VAGRANT_VMRAM=12288 00:01:17.320 SPDK_VAGRANT_PROVIDER=libvirt 00:01:17.320 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:17.320 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:17.320 SPDK_OPENSTACK_NETWORK=0 00:01:17.320 VAGRANT_PACKAGE_BOX=0 00:01:17.320 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:17.320 FORCE_DISTRO=true 00:01:17.320 VAGRANT_BOX_VERSION= 00:01:17.320 EXTRA_VAGRANTFILES= 00:01:17.320 NIC_MODEL=e1000 00:01:17.320 00:01:17.320 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:17.320 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:20.608 Bringing machine 'default' up with 'libvirt' provider... 00:01:20.608 ==> default: Creating image (snapshot of base box volume). 00:01:20.867 ==> default: Creating domain with the following settings... 00:01:20.867 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732828124_975c95296df170f3038c 00:01:20.867 ==> default: -- Domain type: kvm 00:01:20.867 ==> default: -- Cpus: 10 00:01:20.867 ==> default: -- Feature: acpi 00:01:20.867 ==> default: -- Feature: apic 00:01:20.867 ==> default: -- Feature: pae 00:01:20.867 ==> default: -- Memory: 12288M 00:01:20.867 ==> default: -- Memory Backing: hugepages: 00:01:20.867 ==> default: -- Management MAC: 00:01:20.867 ==> default: -- Loader: 00:01:20.867 ==> default: -- Nvram: 00:01:20.867 ==> default: -- Base box: spdk/fedora39 00:01:20.867 ==> default: -- Storage pool: default 00:01:20.867 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732828124_975c95296df170f3038c.img (20G) 00:01:20.867 ==> default: -- Volume Cache: default 00:01:20.868 ==> default: -- Kernel: 00:01:20.868 ==> default: -- Initrd: 00:01:20.868 ==> default: -- Graphics Type: vnc 00:01:20.868 ==> default: -- Graphics Port: -1 00:01:20.868 ==> default: -- Graphics IP: 127.0.0.1 00:01:20.868 ==> default: -- Graphics Password: Not defined 00:01:20.868 ==> default: -- Video Type: cirrus 00:01:20.868 ==> default: -- Video VRAM: 9216 00:01:20.868 ==> default: -- Sound Type: 00:01:20.868 ==> default: -- Keymap: en-us 00:01:20.868 ==> default: -- TPM Path: 00:01:20.868 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:20.868 ==> default: -- Command line args: 00:01:20.868 ==> default: -> value=-device, 00:01:20.868 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:20.868 ==> default: -> value=-drive, 00:01:20.868 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:01:20.868 ==> default: -> value=-device, 00:01:20.868 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.868 ==> default: -> value=-device, 00:01:20.868 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:20.868 ==> default: -> value=-drive, 00:01:20.868 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:20.868 ==> default: -> value=-device, 00:01:20.868 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.868 ==> default: -> value=-drive, 00:01:20.868 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:20.868 ==> default: -> value=-device, 00:01:20.868 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.868 ==> default: -> value=-drive, 00:01:20.868 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:20.868 ==> default: -> value=-device, 00:01:20.868 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.868 ==> default: Creating shared folders metadata... 00:01:20.868 ==> default: Starting domain. 00:01:22.296 ==> default: Waiting for domain to get an IP address... 00:01:40.387 ==> default: Waiting for SSH to become available... 00:01:40.387 ==> default: Configuring and enabling network interfaces... 00:01:42.919 default: SSH address: 192.168.121.244:22 00:01:42.919 default: SSH username: vagrant 00:01:42.919 default: SSH auth method: private key 00:01:44.823 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:52.939 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:58.211 ==> default: Mounting SSHFS shared folder... 00:01:59.591 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:59.591 ==> default: Checking Mount.. 00:02:00.969 ==> default: Folder Successfully Mounted! 00:02:00.969 ==> default: Running provisioner: file... 00:02:01.536 default: ~/.gitconfig => .gitconfig 00:02:02.101 00:02:02.101 SUCCESS! 00:02:02.101 00:02:02.101 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:02.101 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:02.101 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:02.101 00:02:02.110 [Pipeline] } 00:02:02.126 [Pipeline] // stage 00:02:02.135 [Pipeline] dir 00:02:02.136 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:02.138 [Pipeline] { 00:02:02.152 [Pipeline] catchError 00:02:02.154 [Pipeline] { 00:02:02.168 [Pipeline] sh 00:02:02.452 + vagrant ssh-config --host vagrant 00:02:02.452 + sed -ne /^Host/,$p 00:02:02.452 + tee ssh_conf 00:02:05.744 Host vagrant 00:02:05.744 HostName 192.168.121.244 00:02:05.744 User vagrant 00:02:05.744 Port 22 00:02:05.744 UserKnownHostsFile /dev/null 00:02:05.744 StrictHostKeyChecking no 00:02:05.744 PasswordAuthentication no 00:02:05.744 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:05.744 IdentitiesOnly yes 00:02:05.744 LogLevel FATAL 00:02:05.744 ForwardAgent yes 00:02:05.744 ForwardX11 yes 00:02:05.744 00:02:05.759 [Pipeline] withEnv 00:02:05.762 [Pipeline] { 00:02:05.778 [Pipeline] sh 00:02:06.057 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:06.057 source /etc/os-release 00:02:06.057 [[ -e /image.version ]] && img=$(< /image.version) 00:02:06.057 # Minimal, systemd-like check. 
00:02:06.057 if [[ -e /.dockerenv ]]; then 00:02:06.057 # Clear garbage from the node's name: 00:02:06.057 # agt-er_autotest_547-896 -> autotest_547-896 00:02:06.057 # $HOSTNAME is the actual container id 00:02:06.057 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:06.057 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:06.057 # We can assume this is a mount from a host where container is running, 00:02:06.057 # so fetch its hostname to easily identify the target swarm worker. 00:02:06.057 container="$(< /etc/hostname) ($agent)" 00:02:06.057 else 00:02:06.057 # Fallback 00:02:06.057 container=$agent 00:02:06.057 fi 00:02:06.057 fi 00:02:06.057 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:06.057 00:02:06.328 [Pipeline] } 00:02:06.344 [Pipeline] // withEnv 00:02:06.352 [Pipeline] setCustomBuildProperty 00:02:06.367 [Pipeline] stage 00:02:06.370 [Pipeline] { (Tests) 00:02:06.386 [Pipeline] sh 00:02:06.663 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:06.940 [Pipeline] sh 00:02:07.220 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:07.493 [Pipeline] timeout 00:02:07.494 Timeout set to expire in 1 hr 0 min 00:02:07.496 [Pipeline] { 00:02:07.514 [Pipeline] sh 00:02:07.798 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:08.364 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:08.374 [Pipeline] sh 00:02:08.651 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:08.924 [Pipeline] sh 00:02:09.203 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:09.478 [Pipeline] sh 00:02:09.758 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:09.759 ++ readlink -f spdk_repo 00:02:09.759 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:09.759 + [[ -n /home/vagrant/spdk_repo ]] 00:02:09.759 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:09.759 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:09.759 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:09.759 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:09.759 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:09.759 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:09.759 + cd /home/vagrant/spdk_repo 00:02:09.759 + source /etc/os-release 00:02:09.759 ++ NAME='Fedora Linux' 00:02:09.759 ++ VERSION='39 (Cloud Edition)' 00:02:09.759 ++ ID=fedora 00:02:09.759 ++ VERSION_ID=39 00:02:09.759 ++ VERSION_CODENAME= 00:02:09.759 ++ PLATFORM_ID=platform:f39 00:02:09.759 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:09.759 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:09.759 ++ LOGO=fedora-logo-icon 00:02:09.759 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:09.759 ++ HOME_URL=https://fedoraproject.org/ 00:02:09.759 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:09.759 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:09.759 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:09.759 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:09.759 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:09.759 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:09.759 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:09.759 ++ SUPPORT_END=2024-11-12 00:02:09.759 ++ VARIANT='Cloud Edition' 00:02:09.759 ++ VARIANT_ID=cloud 00:02:09.759 + uname -a 00:02:09.759 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:09.759 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:10.018 Hugepages 00:02:10.018 node hugesize free / total 00:02:10.018 node0 1048576kB 0 / 0 00:02:10.018 node0 2048kB 0 / 0 00:02:10.018 00:02:10.018 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:10.018 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:10.018 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:10.018 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:02:10.018 + rm -f /tmp/spdk-ld-path 00:02:10.018 + source autorun-spdk.conf 00:02:10.018 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.018 ++ SPDK_TEST_NVMF=1 00:02:10.018 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.018 ++ SPDK_TEST_URING=1 00:02:10.018 ++ SPDK_TEST_USDT=1 00:02:10.018 ++ SPDK_RUN_UBSAN=1 00:02:10.018 ++ NET_TYPE=virt 00:02:10.018 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:10.018 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:10.018 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.018 ++ RUN_NIGHTLY=1 00:02:10.018 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:10.018 + [[ -n '' ]] 00:02:10.018 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:10.018 + for M in /var/spdk/build-*-manifest.txt 00:02:10.018 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:10.018 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.018 + for M in /var/spdk/build-*-manifest.txt 00:02:10.018 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:10.018 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.018 + for M in /var/spdk/build-*-manifest.txt 00:02:10.018 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:10.018 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.277 ++ uname 00:02:10.277 + [[ Linux == \L\i\n\u\x ]] 00:02:10.277 + sudo dmesg -T 00:02:10.277 + sudo dmesg --clear 00:02:10.277 + dmesg_pid=5965 00:02:10.277 + [[ Fedora Linux == FreeBSD ]] 00:02:10.277 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.277 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.277 + sudo dmesg 
-Tw 00:02:10.277 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:10.277 + [[ -x /usr/src/fio-static/fio ]] 00:02:10.277 + export FIO_BIN=/usr/src/fio-static/fio 00:02:10.277 + FIO_BIN=/usr/src/fio-static/fio 00:02:10.277 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:10.277 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:10.277 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:10.277 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.277 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.278 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:10.278 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.278 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.278 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:10.278 Test configuration: 00:02:10.278 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.278 SPDK_TEST_NVMF=1 00:02:10.278 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.278 SPDK_TEST_URING=1 00:02:10.278 SPDK_TEST_USDT=1 00:02:10.278 SPDK_RUN_UBSAN=1 00:02:10.278 NET_TYPE=virt 00:02:10.278 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:10.278 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:10.278 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.278 RUN_NIGHTLY=1 21:09:33 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:10.278 21:09:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:10.278 21:09:33 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:10.278 21:09:33 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.278 21:09:33 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.278 21:09:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.278 21:09:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.278 21:09:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.278 21:09:33 -- paths/export.sh@5 -- $ export PATH 00:02:10.278 21:09:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.278 21:09:33 -- common/autobuild_common.sh@439 
-- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:10.278 21:09:33 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:10.278 21:09:33 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732828173.XXXXXX 00:02:10.278 21:09:33 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732828173.92wLLz 00:02:10.278 21:09:33 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:10.278 21:09:33 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:02:10.278 21:09:33 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:10.278 21:09:33 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:10.278 21:09:33 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:10.278 21:09:33 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:10.278 21:09:33 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:10.278 21:09:33 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:10.278 21:09:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.278 21:09:33 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:10.278 21:09:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:10.278 21:09:33 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:10.278 21:09:33 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:10.278 21:09:33 -- spdk/autobuild.sh@16 -- $ date -u 00:02:10.278 Thu Nov 28 09:09:33 PM UTC 2024 00:02:10.278 21:09:33 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:10.278 LTS-67-gc13c99a5e 00:02:10.278 21:09:33 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:10.278 21:09:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:10.278 21:09:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:10.278 21:09:33 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:10.278 21:09:33 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:10.278 21:09:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.278 ************************************ 00:02:10.278 START TEST ubsan 00:02:10.278 ************************************ 00:02:10.278 using ubsan 00:02:10.278 21:09:33 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:10.278 00:02:10.278 real 0m0.000s 00:02:10.278 user 0m0.000s 00:02:10.278 sys 0m0.000s 00:02:10.278 21:09:33 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:10.278 21:09:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.278 ************************************ 00:02:10.278 END TEST ubsan 00:02:10.278 ************************************ 00:02:10.278 21:09:34 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:10.278 21:09:34 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:10.278 21:09:34 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:10.278 21:09:34 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:10.278 21:09:34 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:10.278 21:09:34 -- common/autotest_common.sh@10 -- $ set +x 
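The configure string assembled above by get_config_params tracks the switches set in autorun-spdk.conf: SPDK_RUN_UBSAN=1 pairs with --enable-ubsan, SPDK_TEST_USDT=1 with --with-usdt, SPDK_TEST_URING=1 with --with-uring, and SPDK_RUN_EXTERNAL_DPDK with --with-dpdk=/home/vagrant/spdk_repo/dpdk/build. A hypothetical bash sketch of that mapping, illustrative only and not the actual get_config_params source:

    # Hypothetical mapping from test-config switches to configure arguments,
    # mirroring the flags visible in the log above.
    config_params='--enable-debug --enable-werror'
    [[ $SPDK_RUN_UBSAN  == 1 ]] && config_params+=' --enable-ubsan'
    [[ $SPDK_TEST_USDT  == 1 ]] && config_params+=' --with-usdt'
    [[ $SPDK_TEST_URING == 1 ]] && config_params+=' --with-uring'
    [[ -n $SPDK_RUN_EXTERNAL_DPDK ]] && config_params+=" --with-dpdk=$SPDK_RUN_EXTERNAL_DPDK"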
00:02:10.278 ************************************ 00:02:10.278 START TEST build_native_dpdk 00:02:10.278 ************************************ 00:02:10.278 21:09:34 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:10.278 21:09:34 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:10.278 21:09:34 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:10.278 21:09:34 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:10.278 21:09:34 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:10.278 21:09:34 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:10.278 21:09:34 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:10.278 21:09:34 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:10.278 21:09:34 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:10.278 21:09:34 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:10.278 21:09:34 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:10.278 21:09:34 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:10.278 21:09:34 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:10.537 21:09:34 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:10.537 21:09:34 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:10.537 21:09:34 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:10.537 21:09:34 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:10.537 21:09:34 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:10.537 21:09:34 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:10.537 21:09:34 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:10.537 21:09:34 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:10.537 caf0f5d395 version: 22.11.4 00:02:10.537 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:10.537 dc9c799c7d vhost: fix missing spinlock unlock 00:02:10.537 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:10.537 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:10.537 21:09:34 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:10.537 21:09:34 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:10.537 21:09:34 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:10.537 21:09:34 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:10.537 21:09:34 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:10.538 21:09:34 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:10.538 21:09:34 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:10.538 21:09:34 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:10.538 21:09:34 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:10.538 21:09:34 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:10.538 21:09:34 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:10.538 21:09:34 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:10.538 21:09:34 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:10.538 21:09:34 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:10.538 21:09:34 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 
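The flag selection traced above reduces to: start from '-fPIC -g -fcommon', then add -Werror for gcc 5 or newer and -Wno-stringop-overflow for gcc 10 or newer (this run detects gcc 13). A condensed sketch of that logic with illustrative variable names; the real code lives in autobuild_common.sh:

    # Base DPDK CFLAGS, then warning flags gated on the gcc major version
    # reported by `gcc -dumpversion` (13 in this run).
    compiler=gcc
    compiler_version=$(gcc -dumpversion)
    compiler_version=${compiler_version%%.*}   # keep the major version only
    dpdk_cflags='-fPIC -g -fcommon'
    dpdk_ldflags=
    if [[ $compiler == *gcc* ]]; then
        [[ $compiler_version -ge 5  ]] && dpdk_cflags+=' -Werror'
        [[ $compiler_version -ge 10 ]] && dpdk_cflags+=' -Wno-stringop-overflow'
    fi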
00:02:10.538 21:09:34 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:10.538 21:09:34 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:10.538 21:09:34 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:10.538 21:09:34 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:10.538 21:09:34 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:10.538 21:09:34 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:10.538 21:09:34 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:10.538 21:09:34 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:10.538 21:09:34 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:10.538 21:09:34 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:10.538 21:09:34 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:10.538 21:09:34 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:10.538 21:09:34 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:10.538 21:09:34 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:10.538 21:09:34 -- scripts/common.sh@343 -- $ case "$op" in 00:02:10.538 21:09:34 -- scripts/common.sh@344 -- $ : 1 00:02:10.538 21:09:34 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:10.538 21:09:34 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:10.538 21:09:34 -- scripts/common.sh@364 -- $ decimal 22 00:02:10.538 21:09:34 -- scripts/common.sh@352 -- $ local d=22 00:02:10.538 21:09:34 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:10.538 21:09:34 -- scripts/common.sh@354 -- $ echo 22 00:02:10.538 21:09:34 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:10.538 21:09:34 -- scripts/common.sh@365 -- $ decimal 21 00:02:10.538 21:09:34 -- scripts/common.sh@352 -- $ local d=21 00:02:10.538 21:09:34 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:10.538 21:09:34 -- scripts/common.sh@354 -- $ echo 21 00:02:10.538 21:09:34 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:10.538 21:09:34 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:10.538 21:09:34 -- scripts/common.sh@366 -- $ return 1 00:02:10.538 21:09:34 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:10.538 patching file config/rte_config.h 00:02:10.538 Hunk #1 succeeded at 60 (offset 1 line). 00:02:10.538 21:09:34 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:10.538 21:09:34 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:10.538 21:09:34 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:10.538 21:09:34 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:10.538 21:09:34 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:10.538 21:09:34 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:10.538 21:09:34 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:10.538 21:09:34 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:10.538 21:09:34 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:10.538 21:09:34 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:10.538 21:09:34 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:10.538 21:09:34 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:10.538 21:09:34 -- scripts/common.sh@343 -- $ case "$op" in 00:02:10.538 21:09:34 -- scripts/common.sh@344 -- $ : 1 00:02:10.538 21:09:34 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:10.538 21:09:34 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:10.538 21:09:34 -- scripts/common.sh@364 -- $ decimal 22 00:02:10.538 21:09:34 -- scripts/common.sh@352 -- $ local d=22 00:02:10.538 21:09:34 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:10.538 21:09:34 -- scripts/common.sh@354 -- $ echo 22 00:02:10.538 21:09:34 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:10.538 21:09:34 -- scripts/common.sh@365 -- $ decimal 24 00:02:10.538 21:09:34 -- scripts/common.sh@352 -- $ local d=24 00:02:10.538 21:09:34 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:10.538 21:09:34 -- scripts/common.sh@354 -- $ echo 24 00:02:10.538 21:09:34 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:10.538 21:09:34 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:10.538 21:09:34 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:10.538 21:09:34 -- scripts/common.sh@367 -- $ return 0 00:02:10.538 21:09:34 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:10.538 patching file lib/pcapng/rte_pcapng.c 00:02:10.538 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:10.538 21:09:34 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:10.538 21:09:34 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:10.538 21:09:34 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:10.538 21:09:34 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:10.538 21:09:34 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:15.834 The Meson build system 00:02:15.834 Version: 1.5.0 00:02:15.834 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:15.834 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:15.834 Build type: native build 00:02:15.834 Program cat found: YES (/usr/bin/cat) 00:02:15.834 Project name: DPDK 00:02:15.834 Project version: 22.11.4 00:02:15.834 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:15.834 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:15.834 Host machine cpu family: x86_64 00:02:15.834 Host machine cpu: x86_64 00:02:15.834 Message: ## Building in Developer Mode ## 00:02:15.834 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:15.834 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:15.834 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:15.834 Program objdump found: YES (/usr/bin/objdump) 00:02:15.834 Program python3 found: YES (/usr/bin/python3) 00:02:15.834 Program cat found: YES (/usr/bin/cat) 00:02:15.834 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
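The lt/cmp_versions trace a few steps back (lt 22.11.4 21.11.0 and lt 22.11.4 24.07.0, just before the two patch commands) performs a component-wise numeric comparison: each version string is split on '.', '-' and ':' and the fields are compared left to right. A minimal standalone sketch of the same idea, assuming plain numeric fields; this is not the scripts/common.sh source:

    # Return success (0) if version $1 sorts strictly before version $2.
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        done
        return 1                                              # equal
    }
    lt 22.11.4 21.11.0 || echo "not older than 21.11.0"   # matches the 'return 1' above
    lt 22.11.4 24.07.0 && echo "older than 24.07.0"       # matches the 'return 0' above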
00:02:15.834 Checking for size of "void *" : 8 00:02:15.834 Checking for size of "void *" : 8 (cached) 00:02:15.834 Library m found: YES 00:02:15.834 Library numa found: YES 00:02:15.834 Has header "numaif.h" : YES 00:02:15.834 Library fdt found: NO 00:02:15.834 Library execinfo found: NO 00:02:15.834 Has header "execinfo.h" : YES 00:02:15.834 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:15.834 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:15.834 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:15.834 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:15.834 Run-time dependency openssl found: YES 3.1.1 00:02:15.834 Run-time dependency libpcap found: YES 1.10.4 00:02:15.834 Has header "pcap.h" with dependency libpcap: YES 00:02:15.834 Compiler for C supports arguments -Wcast-qual: YES 00:02:15.834 Compiler for C supports arguments -Wdeprecated: YES 00:02:15.834 Compiler for C supports arguments -Wformat: YES 00:02:15.834 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:15.834 Compiler for C supports arguments -Wformat-security: NO 00:02:15.834 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:15.834 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:15.834 Compiler for C supports arguments -Wnested-externs: YES 00:02:15.834 Compiler for C supports arguments -Wold-style-definition: YES 00:02:15.834 Compiler for C supports arguments -Wpointer-arith: YES 00:02:15.834 Compiler for C supports arguments -Wsign-compare: YES 00:02:15.834 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:15.834 Compiler for C supports arguments -Wundef: YES 00:02:15.834 Compiler for C supports arguments -Wwrite-strings: YES 00:02:15.834 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:15.834 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:15.834 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:15.834 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:15.834 Compiler for C supports arguments -mavx512f: YES 00:02:15.834 Checking if "AVX512 checking" compiles: YES 00:02:15.834 Fetching value of define "__SSE4_2__" : 1 00:02:15.834 Fetching value of define "__AES__" : 1 00:02:15.834 Fetching value of define "__AVX__" : 1 00:02:15.834 Fetching value of define "__AVX2__" : 1 00:02:15.834 Fetching value of define "__AVX512BW__" : (undefined) 00:02:15.834 Fetching value of define "__AVX512CD__" : (undefined) 00:02:15.834 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:15.834 Fetching value of define "__AVX512F__" : (undefined) 00:02:15.834 Fetching value of define "__AVX512VL__" : (undefined) 00:02:15.834 Fetching value of define "__PCLMUL__" : 1 00:02:15.834 Fetching value of define "__RDRND__" : 1 00:02:15.834 Fetching value of define "__RDSEED__" : 1 00:02:15.834 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:15.834 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:15.834 Message: lib/kvargs: Defining dependency "kvargs" 00:02:15.834 Message: lib/telemetry: Defining dependency "telemetry" 00:02:15.834 Checking for function "getentropy" : YES 00:02:15.834 Message: lib/eal: Defining dependency "eal" 00:02:15.834 Message: lib/ring: Defining dependency "ring" 00:02:15.834 Message: lib/rcu: Defining dependency "rcu" 00:02:15.834 Message: lib/mempool: Defining dependency "mempool" 00:02:15.834 Message: lib/mbuf: Defining dependency "mbuf" 00:02:15.834 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:15.834 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.834 Compiler for C supports arguments -mpclmul: YES 00:02:15.834 Compiler for C supports arguments -maes: YES 00:02:15.834 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:15.834 Compiler for C supports arguments -mavx512bw: YES 00:02:15.834 Compiler for C supports arguments -mavx512dq: YES 00:02:15.834 Compiler for C supports arguments -mavx512vl: YES 00:02:15.834 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:15.834 Compiler for C supports arguments -mavx2: YES 00:02:15.834 Compiler for C supports arguments -mavx: YES 00:02:15.834 Message: lib/net: Defining dependency "net" 00:02:15.834 Message: lib/meter: Defining dependency "meter" 00:02:15.834 Message: lib/ethdev: Defining dependency "ethdev" 00:02:15.834 Message: lib/pci: Defining dependency "pci" 00:02:15.834 Message: lib/cmdline: Defining dependency "cmdline" 00:02:15.834 Message: lib/metrics: Defining dependency "metrics" 00:02:15.834 Message: lib/hash: Defining dependency "hash" 00:02:15.834 Message: lib/timer: Defining dependency "timer" 00:02:15.834 Fetching value of define "__AVX2__" : 1 (cached) 00:02:15.834 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.834 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:15.834 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:15.834 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:15.834 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:15.834 Message: lib/acl: Defining dependency "acl" 00:02:15.834 Message: lib/bbdev: Defining dependency "bbdev" 00:02:15.834 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:15.834 Run-time dependency libelf found: YES 0.191 00:02:15.834 Message: lib/bpf: Defining dependency "bpf" 00:02:15.834 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:15.834 Message: lib/compressdev: Defining dependency "compressdev" 00:02:15.834 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:15.834 Message: lib/distributor: Defining dependency "distributor" 00:02:15.834 Message: lib/efd: Defining dependency "efd" 00:02:15.834 Message: lib/eventdev: Defining dependency "eventdev" 00:02:15.834 Message: lib/gpudev: Defining dependency "gpudev" 00:02:15.834 Message: lib/gro: Defining dependency "gro" 00:02:15.834 Message: lib/gso: Defining dependency "gso" 00:02:15.834 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:15.834 Message: lib/jobstats: Defining dependency "jobstats" 00:02:15.834 Message: lib/latencystats: Defining dependency "latencystats" 00:02:15.834 Message: lib/lpm: Defining dependency "lpm" 00:02:15.834 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.834 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:15.834 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:15.834 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:15.834 Message: lib/member: Defining dependency "member" 00:02:15.834 Message: lib/pcapng: Defining dependency "pcapng" 00:02:15.834 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:15.834 Message: lib/power: Defining dependency "power" 00:02:15.834 Message: lib/rawdev: Defining dependency "rawdev" 00:02:15.834 Message: lib/regexdev: Defining dependency "regexdev" 00:02:15.834 Message: lib/dmadev: Defining dependency "dmadev" 00:02:15.834 Message: lib/rib: Defining 
dependency "rib" 00:02:15.835 Message: lib/reorder: Defining dependency "reorder" 00:02:15.835 Message: lib/sched: Defining dependency "sched" 00:02:15.835 Message: lib/security: Defining dependency "security" 00:02:15.835 Message: lib/stack: Defining dependency "stack" 00:02:15.835 Has header "linux/userfaultfd.h" : YES 00:02:15.835 Message: lib/vhost: Defining dependency "vhost" 00:02:15.835 Message: lib/ipsec: Defining dependency "ipsec" 00:02:15.835 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.835 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:15.835 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:15.835 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:15.835 Message: lib/fib: Defining dependency "fib" 00:02:15.835 Message: lib/port: Defining dependency "port" 00:02:15.835 Message: lib/pdump: Defining dependency "pdump" 00:02:15.835 Message: lib/table: Defining dependency "table" 00:02:15.835 Message: lib/pipeline: Defining dependency "pipeline" 00:02:15.835 Message: lib/graph: Defining dependency "graph" 00:02:15.835 Message: lib/node: Defining dependency "node" 00:02:15.835 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:15.835 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:15.835 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:15.835 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:15.835 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:15.835 Compiler for C supports arguments -Wno-unused-value: YES 00:02:15.835 Compiler for C supports arguments -Wno-format: YES 00:02:15.835 Compiler for C supports arguments -Wno-format-security: YES 00:02:15.835 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:17.738 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:17.738 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:17.738 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:17.738 Fetching value of define "__AVX2__" : 1 (cached) 00:02:17.738 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:17.738 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:17.738 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:17.738 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:17.738 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:17.738 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:17.738 Configuring doxy-api.conf using configuration 00:02:17.738 Program sphinx-build found: NO 00:02:17.738 Configuring rte_build_config.h using configuration 00:02:17.738 Message: 00:02:17.738 ================= 00:02:17.738 Applications Enabled 00:02:17.738 ================= 00:02:17.738 00:02:17.738 apps: 00:02:17.738 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:17.738 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:17.738 test-security-perf, 00:02:17.738 00:02:17.738 Message: 00:02:17.738 ================= 00:02:17.738 Libraries Enabled 00:02:17.738 ================= 00:02:17.738 00:02:17.738 libs: 00:02:17.738 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:17.738 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:17.738 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:17.738 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:17.738 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:17.738 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:17.738 table, pipeline, graph, node, 00:02:17.738 00:02:17.738 Message: 00:02:17.738 =============== 00:02:17.738 Drivers Enabled 00:02:17.738 =============== 00:02:17.738 00:02:17.738 common: 00:02:17.738 00:02:17.738 bus: 00:02:17.738 pci, vdev, 00:02:17.738 mempool: 00:02:17.738 ring, 00:02:17.738 dma: 00:02:17.738 00:02:17.738 net: 00:02:17.738 i40e, 00:02:17.738 raw: 00:02:17.738 00:02:17.738 crypto: 00:02:17.738 00:02:17.738 compress: 00:02:17.738 00:02:17.738 regex: 00:02:17.738 00:02:17.738 vdpa: 00:02:17.738 00:02:17.738 event: 00:02:17.738 00:02:17.738 baseband: 00:02:17.738 00:02:17.738 gpu: 00:02:17.738 00:02:17.738 00:02:17.738 Message: 00:02:17.738 ================= 00:02:17.738 Content Skipped 00:02:17.738 ================= 00:02:17.738 00:02:17.738 apps: 00:02:17.738 00:02:17.738 libs: 00:02:17.738 kni: explicitly disabled via build config (deprecated lib) 00:02:17.738 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:17.738 00:02:17.738 drivers: 00:02:17.738 common/cpt: not in enabled drivers build config 00:02:17.738 common/dpaax: not in enabled drivers build config 00:02:17.738 common/iavf: not in enabled drivers build config 00:02:17.738 common/idpf: not in enabled drivers build config 00:02:17.738 common/mvep: not in enabled drivers build config 00:02:17.738 common/octeontx: not in enabled drivers build config 00:02:17.738 bus/auxiliary: not in enabled drivers build config 00:02:17.738 bus/dpaa: not in enabled drivers build config 00:02:17.738 bus/fslmc: not in enabled drivers build config 00:02:17.738 bus/ifpga: not in enabled drivers build config 00:02:17.738 bus/vmbus: not in enabled drivers build config 00:02:17.738 common/cnxk: not in enabled drivers build config 00:02:17.738 common/mlx5: not in enabled drivers build config 00:02:17.738 common/qat: not in enabled drivers build config 00:02:17.738 common/sfc_efx: not in enabled drivers build config 00:02:17.738 mempool/bucket: not in enabled drivers build config 00:02:17.738 mempool/cnxk: not in enabled drivers build config 00:02:17.738 mempool/dpaa: not in enabled drivers build config 00:02:17.738 mempool/dpaa2: not in enabled drivers build config 00:02:17.738 mempool/octeontx: not in enabled drivers build config 00:02:17.738 mempool/stack: not in enabled drivers build config 00:02:17.738 dma/cnxk: not in enabled drivers build config 00:02:17.738 dma/dpaa: not in enabled drivers build config 00:02:17.738 dma/dpaa2: not in enabled drivers build config 00:02:17.738 dma/hisilicon: not in enabled drivers build config 00:02:17.738 dma/idxd: not in enabled drivers build config 00:02:17.738 dma/ioat: not in enabled drivers build config 00:02:17.738 dma/skeleton: not in enabled drivers build config 00:02:17.738 net/af_packet: not in enabled drivers build config 00:02:17.738 net/af_xdp: not in enabled drivers build config 00:02:17.738 net/ark: not in enabled drivers build config 00:02:17.738 net/atlantic: not in enabled drivers build config 00:02:17.738 net/avp: not in enabled drivers build config 00:02:17.738 net/axgbe: not in enabled drivers build config 00:02:17.738 net/bnx2x: not in enabled drivers build config 00:02:17.738 net/bnxt: not in enabled drivers build config 00:02:17.738 net/bonding: not in enabled drivers build config 00:02:17.738 net/cnxk: not in enabled drivers build config 00:02:17.738 net/cxgbe: not in 
enabled drivers build config 00:02:17.738 net/dpaa: not in enabled drivers build config 00:02:17.738 net/dpaa2: not in enabled drivers build config 00:02:17.738 net/e1000: not in enabled drivers build config 00:02:17.738 net/ena: not in enabled drivers build config 00:02:17.738 net/enetc: not in enabled drivers build config 00:02:17.738 net/enetfec: not in enabled drivers build config 00:02:17.738 net/enic: not in enabled drivers build config 00:02:17.738 net/failsafe: not in enabled drivers build config 00:02:17.738 net/fm10k: not in enabled drivers build config 00:02:17.738 net/gve: not in enabled drivers build config 00:02:17.738 net/hinic: not in enabled drivers build config 00:02:17.738 net/hns3: not in enabled drivers build config 00:02:17.738 net/iavf: not in enabled drivers build config 00:02:17.738 net/ice: not in enabled drivers build config 00:02:17.738 net/idpf: not in enabled drivers build config 00:02:17.738 net/igc: not in enabled drivers build config 00:02:17.738 net/ionic: not in enabled drivers build config 00:02:17.738 net/ipn3ke: not in enabled drivers build config 00:02:17.738 net/ixgbe: not in enabled drivers build config 00:02:17.738 net/kni: not in enabled drivers build config 00:02:17.738 net/liquidio: not in enabled drivers build config 00:02:17.738 net/mana: not in enabled drivers build config 00:02:17.738 net/memif: not in enabled drivers build config 00:02:17.738 net/mlx4: not in enabled drivers build config 00:02:17.738 net/mlx5: not in enabled drivers build config 00:02:17.738 net/mvneta: not in enabled drivers build config 00:02:17.738 net/mvpp2: not in enabled drivers build config 00:02:17.738 net/netvsc: not in enabled drivers build config 00:02:17.738 net/nfb: not in enabled drivers build config 00:02:17.738 net/nfp: not in enabled drivers build config 00:02:17.738 net/ngbe: not in enabled drivers build config 00:02:17.738 net/null: not in enabled drivers build config 00:02:17.738 net/octeontx: not in enabled drivers build config 00:02:17.738 net/octeon_ep: not in enabled drivers build config 00:02:17.738 net/pcap: not in enabled drivers build config 00:02:17.738 net/pfe: not in enabled drivers build config 00:02:17.738 net/qede: not in enabled drivers build config 00:02:17.738 net/ring: not in enabled drivers build config 00:02:17.738 net/sfc: not in enabled drivers build config 00:02:17.738 net/softnic: not in enabled drivers build config 00:02:17.738 net/tap: not in enabled drivers build config 00:02:17.739 net/thunderx: not in enabled drivers build config 00:02:17.739 net/txgbe: not in enabled drivers build config 00:02:17.739 net/vdev_netvsc: not in enabled drivers build config 00:02:17.739 net/vhost: not in enabled drivers build config 00:02:17.739 net/virtio: not in enabled drivers build config 00:02:17.739 net/vmxnet3: not in enabled drivers build config 00:02:17.739 raw/cnxk_bphy: not in enabled drivers build config 00:02:17.739 raw/cnxk_gpio: not in enabled drivers build config 00:02:17.739 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:17.739 raw/ifpga: not in enabled drivers build config 00:02:17.739 raw/ntb: not in enabled drivers build config 00:02:17.739 raw/skeleton: not in enabled drivers build config 00:02:17.739 crypto/armv8: not in enabled drivers build config 00:02:17.739 crypto/bcmfs: not in enabled drivers build config 00:02:17.739 crypto/caam_jr: not in enabled drivers build config 00:02:17.739 crypto/ccp: not in enabled drivers build config 00:02:17.739 crypto/cnxk: not in enabled drivers build config 00:02:17.739 
crypto/dpaa_sec: not in enabled drivers build config
00:02:17.739 crypto/dpaa2_sec: not in enabled drivers build config
00:02:17.739 crypto/ipsec_mb: not in enabled drivers build config
00:02:17.739 crypto/mlx5: not in enabled drivers build config
00:02:17.739 crypto/mvsam: not in enabled drivers build config
00:02:17.739 crypto/nitrox: not in enabled drivers build config
00:02:17.739 crypto/null: not in enabled drivers build config
00:02:17.739 crypto/octeontx: not in enabled drivers build config
00:02:17.739 crypto/openssl: not in enabled drivers build config
00:02:17.739 crypto/scheduler: not in enabled drivers build config
00:02:17.739 crypto/uadk: not in enabled drivers build config
00:02:17.739 crypto/virtio: not in enabled drivers build config
00:02:17.739 compress/isal: not in enabled drivers build config
00:02:17.739 compress/mlx5: not in enabled drivers build config
00:02:17.739 compress/octeontx: not in enabled drivers build config
00:02:17.739 compress/zlib: not in enabled drivers build config
00:02:17.739 regex/mlx5: not in enabled drivers build config
00:02:17.739 regex/cn9k: not in enabled drivers build config
00:02:17.739 vdpa/ifc: not in enabled drivers build config
00:02:17.739 vdpa/mlx5: not in enabled drivers build config
00:02:17.739 vdpa/sfc: not in enabled drivers build config
00:02:17.739 event/cnxk: not in enabled drivers build config
00:02:17.739 event/dlb2: not in enabled drivers build config
00:02:17.739 event/dpaa: not in enabled drivers build config
00:02:17.739 event/dpaa2: not in enabled drivers build config
00:02:17.739 event/dsw: not in enabled drivers build config
00:02:17.739 event/opdl: not in enabled drivers build config
00:02:17.739 event/skeleton: not in enabled drivers build config
00:02:17.739 event/sw: not in enabled drivers build config
00:02:17.739 event/octeontx: not in enabled drivers build config
00:02:17.739 baseband/acc: not in enabled drivers build config
00:02:17.739 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:17.739 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:17.739 baseband/la12xx: not in enabled drivers build config
00:02:17.739 baseband/null: not in enabled drivers build config
00:02:17.739 baseband/turbo_sw: not in enabled drivers build config
00:02:17.739 gpu/cuda: not in enabled drivers build config
00:02:17.739
00:02:17.739
00:02:17.739 Build targets in project: 314
00:02:17.739
00:02:17.739 DPDK 22.11.4
00:02:17.739
00:02:17.739 User defined options
00:02:17.739 libdir : lib
00:02:17.739 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:17.739 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:17.739 c_link_args :
00:02:17.739 enable_docs : false
00:02:17.739 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:17.739 enable_kmods : false
00:02:17.739 machine : native
00:02:17.739 tests : false
00:02:17.739
00:02:17.739 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:17.739 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
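
[Editor's note] The "User defined options" summary above corresponds roughly to the configure-and-build invocation sketched below. This is only a reconstruction from the logged values (the source and build paths, compiler flags, and driver list are taken verbatim from the log; the autobuild script may pass additional flags that are not echoed here), not the exact command that ran. The WARNING above also shows the script still invokes `meson [options]` without the explicit `setup` subcommand, which newer Meson releases deprecate in favour of `meson setup [options]`.

  # Hypothetical reconstruction of the DPDK 22.11.4 configure step implied by
  # the options summary above (sketch only, not the command the script ran).
  cd /home/vagrant/spdk_repo/dpdk
  meson setup build-tmp \
      --prefix /home/vagrant/spdk_repo/dpdk/build \
      --libdir lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Dc_link_args='' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false
  # Build step as it appears in the log entry that follows.
  ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
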
00:02:17.739 21:09:41 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:17.739 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:17.739 [1/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:17.739 [2/743] Generating lib/rte_kvargs_def with a custom command 00:02:17.739 [3/743] Generating lib/rte_telemetry_def with a custom command 00:02:17.739 [4/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:17.739 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:17.739 [6/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:17.739 [7/743] Linking static target lib/librte_kvargs.a 00:02:17.739 [8/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:17.739 [9/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:17.739 [10/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:17.739 [11/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:17.739 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:17.739 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:17.739 [14/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:17.998 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:17.998 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:17.998 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:17.998 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:17.998 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:17.998 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.998 [21/743] Linking target lib/librte_kvargs.so.23.0 00:02:17.998 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:17.998 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:17.998 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:17.998 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:18.256 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:18.256 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:18.256 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:18.256 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:18.256 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:18.256 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:18.256 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:18.256 [33/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:18.256 [34/743] Linking static target lib/librte_telemetry.a 00:02:18.256 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:18.515 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:18.515 [37/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:18.515 [38/743] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:18.515 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:18.515 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:18.515 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:18.515 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:18.515 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:18.774 [44/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.774 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:18.774 [46/743] Linking target lib/librte_telemetry.so.23.0 00:02:18.774 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:18.774 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:18.774 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:18.774 [50/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:18.774 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:18.774 [52/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:18.774 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:18.774 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:18.774 [55/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:19.033 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:19.033 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:19.033 [58/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:19.033 [59/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:19.033 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:19.033 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:19.033 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:19.033 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:19.033 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:19.033 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:19.033 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:19.033 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:19.033 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:19.033 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:19.292 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:19.292 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:19.292 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:19.292 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:19.292 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:19.292 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:19.292 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:19.292 [77/743] Generating lib/rte_eal_def with a custom command 00:02:19.292 [78/743] Generating lib/rte_eal_mingw with a custom command 
00:02:19.292 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:19.292 [80/743] Generating lib/rte_ring_def with a custom command 00:02:19.292 [81/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:19.292 [82/743] Generating lib/rte_ring_mingw with a custom command 00:02:19.292 [83/743] Generating lib/rte_rcu_def with a custom command 00:02:19.292 [84/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:19.292 [85/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:19.292 [86/743] Generating lib/rte_rcu_mingw with a custom command 00:02:19.551 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:19.551 [88/743] Linking static target lib/librte_ring.a 00:02:19.551 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:19.551 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:19.551 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:19.551 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:19.551 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:19.884 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.884 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:19.884 [96/743] Linking static target lib/librte_eal.a 00:02:19.884 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:20.148 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:20.148 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:20.148 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:20.148 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:20.148 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:20.148 [103/743] Linking static target lib/librte_rcu.a 00:02:20.148 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:20.148 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:20.406 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:20.406 [107/743] Linking static target lib/librte_mempool.a 00:02:20.406 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.406 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:20.664 [110/743] Generating lib/rte_net_def with a custom command 00:02:20.664 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:20.664 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:20.665 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:20.665 [114/743] Generating lib/rte_meter_def with a custom command 00:02:20.665 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:20.665 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:20.665 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:20.665 [118/743] Linking static target lib/librte_meter.a 00:02:20.923 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:20.923 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:20.923 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:20.923 [122/743] Generating lib/meter.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:21.181 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:21.181 [124/743] Linking static target lib/librte_mbuf.a 00:02:21.181 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:21.181 [126/743] Linking static target lib/librte_net.a 00:02:21.181 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.440 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.440 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:21.440 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:21.440 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:21.707 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:21.707 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.707 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:21.965 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:22.222 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:22.222 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:22.222 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:22.222 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:22.222 [140/743] Generating lib/rte_pci_def with a custom command 00:02:22.480 [141/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:22.480 [142/743] Generating lib/rte_pci_mingw with a custom command 00:02:22.480 [143/743] Linking static target lib/librte_pci.a 00:02:22.480 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:22.480 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:22.480 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:22.480 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:22.480 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:22.480 [149/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.480 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:22.738 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:22.738 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:22.738 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:22.738 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:22.738 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:22.738 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:22.738 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:22.738 [158/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:22.738 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:22.738 [160/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:22.738 [161/743] Generating lib/rte_metrics_def with a custom command 00:02:22.738 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:22.996 [163/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:22.996 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:22.996 [165/743] Generating lib/rte_hash_def with a custom command 00:02:22.996 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:22.996 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:22.996 [168/743] Generating lib/rte_timer_def with a custom command 00:02:22.996 [169/743] Generating lib/rte_timer_mingw with a custom command 00:02:22.996 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:22.996 [171/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:22.996 [172/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:22.996 [173/743] Linking static target lib/librte_cmdline.a 00:02:23.563 [174/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:23.563 [175/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:23.563 [176/743] Linking static target lib/librte_timer.a 00:02:23.563 [177/743] Linking static target lib/librte_metrics.a 00:02:23.821 [178/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.821 [179/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.079 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:24.079 [181/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:24.079 [182/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.079 [183/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:24.079 [184/743] Linking static target lib/librte_ethdev.a 00:02:24.646 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:24.646 [186/743] Generating lib/rte_acl_def with a custom command 00:02:24.646 [187/743] Generating lib/rte_acl_mingw with a custom command 00:02:24.646 [188/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:24.646 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:24.646 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:24.646 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:24.646 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:24.646 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:25.213 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:25.213 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:25.213 [196/743] Linking static target lib/librte_bitratestats.a 00:02:25.472 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:25.472 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.472 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:25.472 [200/743] Linking static target lib/librte_bbdev.a 00:02:25.730 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:25.989 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:25.989 [203/743] Linking static target lib/librte_hash.a 00:02:25.989 [204/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:25.989 [205/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:26.248 [206/743] Generating lib/bbdev.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:26.248 [207/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:26.248 [208/743] Linking static target lib/acl/libavx512_tmp.a 00:02:26.248 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:26.816 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.816 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:26.816 [212/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:26.816 [213/743] Generating lib/rte_bpf_mingw with a custom command 00:02:26.816 [214/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:26.816 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:02:26.816 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:26.816 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:26.816 [218/743] Linking static target lib/librte_acl.a 00:02:26.816 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:27.073 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:27.073 [221/743] Linking static target lib/librte_cfgfile.a 00:02:27.073 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:27.073 [223/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.073 [224/743] Generating lib/rte_compressdev_def with a custom command 00:02:27.073 [225/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:27.073 [226/743] Linking target lib/librte_eal.so.23.0 00:02:27.073 [227/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.331 [228/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:27.331 [229/743] Linking target lib/librte_ring.so.23.0 00:02:27.331 [230/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.331 [231/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:27.331 [232/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:27.331 [233/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:27.331 [234/743] Linking target lib/librte_meter.so.23.0 00:02:27.331 [235/743] Linking target lib/librte_pci.so.23.0 00:02:27.331 [236/743] Linking target lib/librte_timer.so.23.0 00:02:27.331 [237/743] Linking target lib/librte_rcu.so.23.0 00:02:27.331 [238/743] Linking target lib/librte_mempool.so.23.0 00:02:27.590 [239/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:27.590 [240/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:27.590 [241/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:27.590 [242/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:27.590 [243/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:27.590 [244/743] Linking static target lib/librte_bpf.a 00:02:27.590 [245/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:27.590 [246/743] Linking target lib/librte_acl.so.23.0 00:02:27.590 [247/743] Linking target lib/librte_cfgfile.so.23.0 00:02:27.590 [248/743] Linking target lib/librte_mbuf.so.23.0 00:02:27.590 [249/743] Generating lib/rte_cryptodev_def with a custom command 
00:02:27.590 [250/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:27.590 [251/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:27.590 [252/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:27.848 [253/743] Linking static target lib/librte_compressdev.a 00:02:27.848 [254/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:27.848 [255/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:27.848 [256/743] Linking target lib/librte_net.so.23.0 00:02:27.848 [257/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:27.848 [258/743] Linking target lib/librte_bbdev.so.23.0 00:02:27.848 [259/743] Generating lib/rte_distributor_def with a custom command 00:02:27.848 [260/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.848 [261/743] Generating lib/rte_distributor_mingw with a custom command 00:02:27.848 [262/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:27.848 [263/743] Generating lib/rte_efd_def with a custom command 00:02:28.107 [264/743] Linking target lib/librte_cmdline.so.23.0 00:02:28.107 [265/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:28.107 [266/743] Linking target lib/librte_hash.so.23.0 00:02:28.107 [267/743] Generating lib/rte_efd_mingw with a custom command 00:02:28.107 [268/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:28.365 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:28.365 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:28.623 [271/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.623 [272/743] Linking target lib/librte_compressdev.so.23.0 00:02:28.623 [273/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:28.623 [274/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.623 [275/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:28.623 [276/743] Linking static target lib/librte_distributor.a 00:02:28.881 [277/743] Linking target lib/librte_ethdev.so.23.0 00:02:28.881 [278/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:28.881 [279/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:28.881 [280/743] Linking target lib/librte_metrics.so.23.0 00:02:28.881 [281/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.140 [282/743] Linking target lib/librte_bpf.so.23.0 00:02:29.140 [283/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:29.140 [284/743] Linking target lib/librte_bitratestats.so.23.0 00:02:29.140 [285/743] Linking target lib/librte_distributor.so.23.0 00:02:29.140 [286/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:29.140 [287/743] Generating lib/rte_eventdev_def with a custom command 00:02:29.140 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:29.140 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:29.140 [290/743] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:29.140 [291/743] Generating lib/rte_gpudev_mingw with a custom command 00:02:29.398 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:29.398 [293/743] Linking static target lib/librte_efd.a 00:02:29.657 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:29.657 [295/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.657 [296/743] Linking target lib/librte_efd.so.23.0 00:02:29.916 [297/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:29.916 [298/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:29.916 [299/743] Generating lib/rte_gro_def with a custom command 00:02:29.916 [300/743] Generating lib/rte_gro_mingw with a custom command 00:02:29.916 [301/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:30.175 [302/743] Linking static target lib/librte_cryptodev.a 00:02:30.175 [303/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:30.175 [304/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:30.175 [305/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:30.175 [306/743] Linking static target lib/librte_gpudev.a 00:02:30.175 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:30.433 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:30.692 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:30.692 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:30.692 [311/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:30.692 [312/743] Generating lib/rte_gso_def with a custom command 00:02:30.692 [313/743] Generating lib/rte_gso_mingw with a custom command 00:02:30.692 [314/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:30.692 [315/743] Linking static target lib/librte_gro.a 00:02:30.951 [316/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.951 [317/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:30.951 [318/743] Linking target lib/librte_gpudev.so.23.0 00:02:30.951 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:30.951 [320/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.951 [321/743] Linking target lib/librte_gro.so.23.0 00:02:31.209 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:31.209 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:31.209 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:31.209 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:31.467 [326/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:31.467 [327/743] Linking static target lib/librte_gso.a 00:02:31.467 [328/743] Linking static target lib/librte_eventdev.a 00:02:31.467 [329/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:31.467 [330/743] Linking static target lib/librte_jobstats.a 00:02:31.467 [331/743] Generating lib/rte_jobstats_def with a custom command 00:02:31.467 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:31.467 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:31.467 [334/743] Linking target lib/librte_gso.so.23.0 00:02:31.725 [335/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:31.725 [336/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:31.725 [337/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:31.725 [338/743] Generating lib/rte_latencystats_def with a custom command 00:02:31.725 [339/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:31.725 [340/743] Generating lib/rte_lpm_def with a custom command 00:02:31.725 [341/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:31.725 [342/743] Generating lib/rte_lpm_mingw with a custom command 00:02:31.725 [343/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.725 [344/743] Linking target lib/librte_jobstats.so.23.0 00:02:31.725 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:31.982 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:31.982 [347/743] Linking static target lib/librte_ip_frag.a 00:02:32.240 [348/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.240 [349/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.240 [350/743] Linking target lib/librte_cryptodev.so.23.0 00:02:32.240 [351/743] Linking target lib/librte_ip_frag.so.23.0 00:02:32.498 [352/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:32.498 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:32.498 [354/743] Linking static target lib/librte_latencystats.a 00:02:32.498 [355/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:32.498 [356/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:32.498 [357/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:32.498 [358/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:32.498 [359/743] Generating lib/rte_member_def with a custom command 00:02:32.498 [360/743] Generating lib/rte_member_mingw with a custom command 00:02:32.498 [361/743] Generating lib/rte_pcapng_def with a custom command 00:02:32.498 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:32.498 [363/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:32.757 [364/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:32.757 [365/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.757 [366/743] Linking target lib/librte_latencystats.so.23.0 00:02:32.757 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:32.757 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:33.015 [369/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:33.015 [370/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:33.015 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:33.274 [372/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:33.274 [373/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:33.274 [374/743] Generating lib/rte_power_def with a custom 
command 00:02:33.274 [375/743] Linking static target lib/librte_lpm.a 00:02:33.274 [376/743] Generating lib/rte_power_mingw with a custom command 00:02:33.274 [377/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.274 [378/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:33.274 [379/743] Generating lib/rte_rawdev_def with a custom command 00:02:33.532 [380/743] Linking target lib/librte_eventdev.so.23.0 00:02:33.532 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:33.532 [382/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:33.532 [383/743] Generating lib/rte_regexdev_def with a custom command 00:02:33.532 [384/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:33.532 [385/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:33.532 [386/743] Linking static target lib/librte_pcapng.a 00:02:33.532 [387/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.532 [388/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:33.532 [389/743] Generating lib/rte_dmadev_def with a custom command 00:02:33.532 [390/743] Linking target lib/librte_lpm.so.23.0 00:02:33.532 [391/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:33.532 [392/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:33.532 [393/743] Linking static target lib/librte_rawdev.a 00:02:33.532 [394/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:33.791 [395/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:33.791 [396/743] Generating lib/rte_rib_def with a custom command 00:02:33.791 [397/743] Generating lib/rte_rib_mingw with a custom command 00:02:33.791 [398/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:33.791 [399/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.791 [400/743] Generating lib/rte_reorder_def with a custom command 00:02:33.791 [401/743] Generating lib/rte_reorder_mingw with a custom command 00:02:33.791 [402/743] Linking target lib/librte_pcapng.so.23.0 00:02:34.050 [403/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:34.050 [404/743] Linking static target lib/librte_dmadev.a 00:02:34.050 [405/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:34.050 [406/743] Linking static target lib/librte_power.a 00:02:34.050 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:34.050 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.050 [409/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:34.050 [410/743] Linking target lib/librte_rawdev.so.23.0 00:02:34.050 [411/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:34.308 [412/743] Linking static target lib/librte_member.a 00:02:34.308 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:34.308 [414/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:34.308 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:34.308 [416/743] Linking static target lib/librte_regexdev.a 00:02:34.308 [417/743] Generating lib/rte_sched_def with a custom command 00:02:34.308 
[418/743] Generating lib/rte_sched_mingw with a custom command 00:02:34.308 [419/743] Generating lib/rte_security_def with a custom command 00:02:34.308 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:34.567 [421/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.567 [422/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:34.567 [423/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:34.567 [424/743] Linking target lib/librte_dmadev.so.23.0 00:02:34.567 [425/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.567 [426/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:34.567 [427/743] Generating lib/rte_stack_def with a custom command 00:02:34.567 [428/743] Linking target lib/librte_member.so.23.0 00:02:34.567 [429/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:34.567 [430/743] Linking static target lib/librte_stack.a 00:02:34.567 [431/743] Generating lib/rte_stack_mingw with a custom command 00:02:34.567 [432/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:34.567 [433/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:34.567 [434/743] Linking static target lib/librte_reorder.a 00:02:34.826 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:34.826 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.826 [437/743] Linking target lib/librte_stack.so.23.0 00:02:34.826 [438/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.826 [439/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.826 [440/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:34.826 [441/743] Linking static target lib/librte_rib.a 00:02:34.826 [442/743] Linking target lib/librte_reorder.so.23.0 00:02:34.826 [443/743] Linking target lib/librte_power.so.23.0 00:02:35.085 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.085 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:35.085 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:35.085 [447/743] Linking static target lib/librte_security.a 00:02:35.344 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.344 [449/743] Linking target lib/librte_rib.so.23.0 00:02:35.344 [450/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:35.344 [451/743] Generating lib/rte_vhost_def with a custom command 00:02:35.602 [452/743] Generating lib/rte_vhost_mingw with a custom command 00:02:35.602 [453/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:35.602 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:35.602 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.602 [456/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:35.602 [457/743] Linking target lib/librte_security.so.23.0 00:02:35.860 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:35.860 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:35.860 [460/743] Linking static target lib/librte_sched.a 
00:02:36.426 [461/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:36.426 [462/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:36.426 [463/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.426 [464/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:36.426 [465/743] Generating lib/rte_ipsec_def with a custom command 00:02:36.426 [466/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:36.426 [467/743] Linking target lib/librte_sched.so.23.0 00:02:36.426 [468/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:36.426 [469/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:36.685 [470/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:36.685 [471/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:36.943 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:36.943 [473/743] Generating lib/rte_fib_def with a custom command 00:02:36.943 [474/743] Generating lib/rte_fib_mingw with a custom command 00:02:36.943 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:36.943 [476/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:37.202 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:37.202 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:37.202 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:37.202 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:37.469 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:37.469 [482/743] Linking static target lib/librte_ipsec.a 00:02:37.742 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.743 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:37.743 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:38.000 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:38.000 [487/743] Linking static target lib/librte_fib.a 00:02:38.000 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:38.000 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:38.000 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:38.258 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:38.258 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.258 [493/743] Linking target lib/librte_fib.so.23.0 00:02:38.516 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:39.083 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:39.083 [496/743] Generating lib/rte_port_def with a custom command 00:02:39.083 [497/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:39.083 [498/743] Generating lib/rte_port_mingw with a custom command 00:02:39.083 [499/743] Generating lib/rte_pdump_def with a custom command 00:02:39.083 [500/743] Generating lib/rte_pdump_mingw with a custom command 00:02:39.083 [501/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:39.083 [502/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:39.083 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:39.341 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:39.341 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:39.341 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:39.600 [507/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:39.600 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:39.600 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:39.600 [510/743] Linking static target lib/librte_port.a 00:02:39.858 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:40.117 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:40.117 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.117 [514/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:40.117 [515/743] Linking target lib/librte_port.so.23.0 00:02:40.117 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:40.375 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:40.375 [518/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:40.375 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:40.375 [520/743] Linking static target lib/librte_pdump.a 00:02:40.634 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.634 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:40.892 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:40.892 [524/743] Generating lib/rte_table_def with a custom command 00:02:40.892 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:40.892 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:40.892 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:41.151 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:41.151 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:41.151 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:41.409 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:41.409 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:41.409 [533/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:41.666 [534/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:41.666 [535/743] Linking static target lib/librte_table.a 00:02:41.666 [536/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:41.666 [537/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.923 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:42.182 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.182 [540/743] Linking target lib/librte_table.so.23.0 00:02:42.182 [541/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:42.440 [542/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:42.440 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:42.440 [544/743] Generating lib/rte_graph_def with a custom command 00:02:42.440 [545/743] Generating 
lib/rte_graph_mingw with a custom command 00:02:42.440 [546/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:42.440 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:42.699 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:42.957 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:42.957 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:42.957 [551/743] Linking static target lib/librte_graph.a 00:02:43.216 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:43.216 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:43.216 [554/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:43.216 [555/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:43.782 [556/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:43.782 [557/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:43.782 [558/743] Generating lib/rte_node_def with a custom command 00:02:43.782 [559/743] Generating lib/rte_node_mingw with a custom command 00:02:43.782 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:43.782 [561/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:43.782 [562/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.041 [563/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:44.041 [564/743] Linking target lib/librte_graph.so.23.0 00:02:44.041 [565/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:44.041 [566/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:44.041 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:44.041 [568/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:44.041 [569/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:44.041 [570/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:44.041 [571/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:44.299 [572/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:44.299 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:44.299 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:44.299 [575/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:44.299 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:44.299 [577/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:44.299 [578/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:44.299 [579/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:44.557 [580/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:44.557 [581/743] Linking static target lib/librte_node.a 00:02:44.557 [582/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:44.557 [583/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:44.557 [584/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:44.816 [585/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.816 [586/743] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.816 [587/743] Linking static target drivers/librte_bus_vdev.a 00:02:44.816 [588/743] Linking target lib/librte_node.so.23.0 00:02:44.816 [589/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.816 [590/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:44.816 [591/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.816 [592/743] Linking static target drivers/librte_bus_pci.a 00:02:45.074 [593/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.074 [594/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.074 [595/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:45.332 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:45.332 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:45.332 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:45.332 [599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:45.332 [600/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.332 [601/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:45.591 [602/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:45.591 [603/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:45.591 [604/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:45.591 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:45.849 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.849 [607/743] Linking static target drivers/librte_mempool_ring.a 00:02:45.849 [608/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.849 [609/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:45.849 [610/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:46.415 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:46.673 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:46.673 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:46.673 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:47.239 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:47.239 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:47.239 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:47.497 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:48.068 [619/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:48.068 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:48.068 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:48.068 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:48.068 [623/743] Generating drivers/rte_net_i40e_mingw with a custom 
command 00:02:48.326 [624/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:48.326 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:49.286 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:49.544 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:49.544 [628/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:49.544 [629/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:49.801 [630/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:49.801 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:49.801 [632/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:49.801 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:49.801 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:50.059 [635/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:50.059 [636/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:50.625 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:50.625 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:50.625 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:50.882 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:50.882 [641/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:50.882 [642/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:50.882 [643/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:50.882 [644/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:50.882 [645/743] Linking static target drivers/librte_net_i40e.a 00:02:51.140 [646/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:51.140 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:51.140 [648/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:51.140 [649/743] Linking static target lib/librte_vhost.a 00:02:51.397 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:51.654 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:51.654 [652/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.654 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:51.654 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:02:51.911 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:51.911 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:52.168 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:52.425 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.425 [659/743] Linking target lib/librte_vhost.so.23.0 00:02:52.682 [660/743] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:52.682 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:52.682 [662/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:52.683 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:52.683 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:52.683 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:52.940 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:52.940 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:53.199 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:53.199 [669/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:53.457 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:53.457 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:53.716 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:53.716 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:54.281 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:54.281 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:54.539 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:54.539 [677/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:54.539 [678/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:54.798 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:54.798 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:54.798 [681/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:55.056 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:55.314 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:55.314 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:55.314 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:55.572 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:55.572 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:55.572 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:55.829 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:56.087 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:56.087 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:56.087 [692/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:56.087 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:56.087 [694/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:56.345 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:56.603 [696/743] 
Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:56.603 [697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:56.861 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:57.119 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:57.377 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:57.377 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:57.635 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:57.635 [703/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:57.894 [704/743] Linking static target lib/librte_pipeline.a 00:02:57.894 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:57.894 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:57.894 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:58.152 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:58.153 [709/743] Linking target app/dpdk-dumpcap 00:02:58.411 [710/743] Linking target app/dpdk-pdump 00:02:58.411 [711/743] Linking target app/dpdk-proc-info 00:02:58.411 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:58.411 [713/743] Linking target app/dpdk-test-acl 00:02:58.670 [714/743] Linking target app/dpdk-test-bbdev 00:02:58.670 [715/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:58.670 [716/743] Linking target app/dpdk-test-compress-perf 00:02:58.670 [717/743] Linking target app/dpdk-test-cmdline 00:02:58.929 [718/743] Linking target app/dpdk-test-crypto-perf 00:02:58.929 [719/743] Linking target app/dpdk-test-eventdev 00:02:59.188 [720/743] Linking target app/dpdk-test-fib 00:02:59.188 [721/743] Linking target app/dpdk-test-flow-perf 00:02:59.188 [722/743] Linking target app/dpdk-test-gpudev 00:02:59.188 [723/743] Linking target app/dpdk-test-pipeline 00:02:59.188 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:59.447 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:59.705 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:59.705 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:59.964 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:59.964 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:59.964 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:00.222 [731/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:00.481 [732/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:00.481 [733/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.481 [734/743] Linking target lib/librte_pipeline.so.23.0 00:03:00.741 [735/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:00.741 [736/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:00.741 [737/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:00.999 [738/743] Linking target app/dpdk-test-sad 00:03:00.999 [739/743] Linking target app/dpdk-test-regex 00:03:01.258 [740/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:01.258 [741/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:01.517 [742/743] 
Linking target app/dpdk-test-security-perf 00:03:01.775 [743/743] Linking target app/dpdk-testpmd 00:03:01.775 21:10:25 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:01.775 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:01.775 [0/1] Installing files. 00:03:02.037 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:02.037 Installing 
/home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:02.037 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 
00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.038 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.039 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.039 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:02.039 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:02.040 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.040 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.300 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:02.301 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:02.301 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:02.301 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:02.301 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.301 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.301 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.301 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.301 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.301 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.301 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.301 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.301 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.301 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.301 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.301 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.301 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.301 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.301 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.564 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:02.565 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:02.565 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:02.565 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.565 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:02.565 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.565 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.565 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.566 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.567 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:02.568 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:02.568 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:02.568 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:02.568 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:02.568 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:02.568 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:02.568 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:02.568 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:02.568 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:02.568 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:02.568 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:02.568 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:02.568 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:02.568 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:02.568 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:02.568 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:02.568 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:02.568 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:02.568 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:02.568 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:02.568 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:02.568 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:02.568 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:02.568 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:02.568 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:02.568 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:02.568 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:02.568 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:02.568 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:02.568 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:02.568 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:02.568 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:02.568 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:02.568 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:02.568 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:02.568 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:02.568 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:02.568 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:02.568 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:02.568 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:02.568 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:02.568 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:02.568 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:02.568 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:02.568 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:02.568 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:02.568 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:02.568 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:02.568 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:02.568 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:02.568 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:02.568 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:02.568 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:02.568 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:02.568 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:02.568 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:02.568 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:02.568 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:02.568 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:02.568 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:02.568 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:02.568 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:02.568 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:02.568 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:02.568 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:02.568 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:02.568 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:02.568 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:02.568 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:02.568 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:02.568 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:02.568 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:02.568 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:02.568 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:02.568 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:02.568 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:02.568 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:02.568 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:02.568 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:02.568 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
00:03:02.568 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:02.569 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:02.569 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:02.569 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:02.569 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:02.569 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:02.569 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:02.569 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:02.569 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:02.569 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:02.569 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:02.569 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:02.569 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:02.569 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:02.569 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:02.569 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:02.569 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:02.569 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:02.569 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:02.569 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:02.569 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:02.569 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:02.569 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:02.569 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:02.569 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:02.569 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:02.569 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:02.569 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:02.569 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:02.569 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:02.569 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:02.569 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:02.569 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:02.569 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:02.569 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:02.569 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:02.569 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:02.569 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:02.569 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:02.569 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:02.569 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:02.569 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:02.569 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:02.569 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:02.569 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:02.569 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:02.829 21:10:26 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:02.829 21:10:26 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:02.829 21:10:26 -- common/autobuild_common.sh@203 -- $ cat 00:03:02.829 21:10:26 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:02.829 00:03:02.829 real 0m52.350s 00:03:02.829 user 6m13.791s 00:03:02.829 sys 0m55.747s 00:03:02.829 21:10:26 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:02.829 21:10:26 -- common/autotest_common.sh@10 -- $ set +x 00:03:02.829 ************************************ 00:03:02.829 END TEST build_native_dpdk 00:03:02.829 ************************************ 00:03:02.829 21:10:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:02.829 21:10:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:02.829 21:10:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:02.829 21:10:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:02.829 21:10:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:02.829 21:10:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:02.829 21:10:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:02.829 
21:10:26 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:02.829 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:03.090 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.090 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:03.090 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:03.386 Using 'verbs' RDMA provider 00:03:16.546 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:31.428 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:31.428 Creating mk/config.mk...done. 00:03:31.428 Creating mk/cc.flags.mk...done. 00:03:31.428 Type 'make' to build. 00:03:31.428 21:10:52 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:31.428 21:10:52 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:31.428 21:10:52 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:31.428 21:10:52 -- common/autotest_common.sh@10 -- $ set +x 00:03:31.428 ************************************ 00:03:31.428 START TEST make 00:03:31.428 ************************************ 00:03:31.428 21:10:53 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:31.428 make[1]: Nothing to be done for 'all'. 00:03:53.356 CC lib/ut_mock/mock.o 00:03:53.356 CC lib/ut/ut.o 00:03:53.356 CC lib/log/log.o 00:03:53.356 CC lib/log/log_flags.o 00:03:53.356 CC lib/log/log_deprecated.o 00:03:53.356 LIB libspdk_ut_mock.a 00:03:53.356 SO libspdk_ut_mock.so.5.0 00:03:53.356 LIB libspdk_ut.a 00:03:53.356 LIB libspdk_log.a 00:03:53.356 SO libspdk_ut.so.1.0 00:03:53.356 SYMLINK libspdk_ut_mock.so 00:03:53.356 SO libspdk_log.so.6.1 00:03:53.356 SYMLINK libspdk_ut.so 00:03:53.356 SYMLINK libspdk_log.so 00:03:53.356 CC lib/ioat/ioat.o 00:03:53.356 CC lib/dma/dma.o 00:03:53.356 CC lib/util/base64.o 00:03:53.356 CXX lib/trace_parser/trace.o 00:03:53.356 CC lib/util/bit_array.o 00:03:53.356 CC lib/util/cpuset.o 00:03:53.356 CC lib/util/crc16.o 00:03:53.356 CC lib/util/crc32.o 00:03:53.356 CC lib/util/crc32c.o 00:03:53.356 CC lib/vfio_user/host/vfio_user_pci.o 00:03:53.356 CC lib/vfio_user/host/vfio_user.o 00:03:53.356 CC lib/util/crc32_ieee.o 00:03:53.356 CC lib/util/crc64.o 00:03:53.356 CC lib/util/dif.o 00:03:53.356 LIB libspdk_dma.a 00:03:53.356 CC lib/util/fd.o 00:03:53.356 SO libspdk_dma.so.3.0 00:03:53.356 CC lib/util/file.o 00:03:53.356 SYMLINK libspdk_dma.so 00:03:53.356 CC lib/util/hexlify.o 00:03:53.356 LIB libspdk_ioat.a 00:03:53.356 CC lib/util/iov.o 00:03:53.356 CC lib/util/math.o 00:03:53.356 SO libspdk_ioat.so.6.0 00:03:53.356 LIB libspdk_vfio_user.a 00:03:53.356 CC lib/util/pipe.o 00:03:53.356 CC lib/util/strerror_tls.o 00:03:53.356 SYMLINK libspdk_ioat.so 00:03:53.356 CC lib/util/string.o 00:03:53.356 SO libspdk_vfio_user.so.4.0 00:03:53.356 CC lib/util/uuid.o 00:03:53.356 CC lib/util/fd_group.o 00:03:53.356 SYMLINK libspdk_vfio_user.so 00:03:53.356 CC lib/util/xor.o 00:03:53.356 CC lib/util/zipf.o 00:03:53.356 LIB libspdk_util.a 00:03:53.356 SO libspdk_util.so.8.0 00:03:53.615 SYMLINK libspdk_util.so 00:03:53.615 LIB libspdk_trace_parser.a 00:03:53.615 CC lib/rdma/rdma_verbs.o 00:03:53.615 CC lib/rdma/common.o 00:03:53.615 CC lib/env_dpdk/memory.o 
00:03:53.615 CC lib/env_dpdk/env.o 00:03:53.615 CC lib/env_dpdk/pci.o 00:03:53.615 CC lib/idxd/idxd.o 00:03:53.615 CC lib/conf/conf.o 00:03:53.615 CC lib/vmd/vmd.o 00:03:53.615 CC lib/json/json_parse.o 00:03:53.615 SO libspdk_trace_parser.so.4.0 00:03:53.874 SYMLINK libspdk_trace_parser.so 00:03:53.874 CC lib/vmd/led.o 00:03:53.874 CC lib/env_dpdk/init.o 00:03:53.874 LIB libspdk_conf.a 00:03:53.874 SO libspdk_conf.so.5.0 00:03:53.874 CC lib/json/json_util.o 00:03:53.874 LIB libspdk_rdma.a 00:03:54.133 CC lib/idxd/idxd_user.o 00:03:54.133 SO libspdk_rdma.so.5.0 00:03:54.133 SYMLINK libspdk_conf.so 00:03:54.133 CC lib/idxd/idxd_kernel.o 00:03:54.133 CC lib/json/json_write.o 00:03:54.133 SYMLINK libspdk_rdma.so 00:03:54.133 CC lib/env_dpdk/threads.o 00:03:54.133 CC lib/env_dpdk/pci_ioat.o 00:03:54.133 CC lib/env_dpdk/pci_virtio.o 00:03:54.133 CC lib/env_dpdk/pci_vmd.o 00:03:54.133 CC lib/env_dpdk/pci_idxd.o 00:03:54.133 CC lib/env_dpdk/pci_event.o 00:03:54.133 CC lib/env_dpdk/sigbus_handler.o 00:03:54.133 LIB libspdk_idxd.a 00:03:54.391 CC lib/env_dpdk/pci_dpdk.o 00:03:54.391 SO libspdk_idxd.so.11.0 00:03:54.391 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:54.391 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:54.391 LIB libspdk_json.a 00:03:54.391 SYMLINK libspdk_idxd.so 00:03:54.391 LIB libspdk_vmd.a 00:03:54.391 SO libspdk_json.so.5.1 00:03:54.391 SO libspdk_vmd.so.5.0 00:03:54.391 SYMLINK libspdk_json.so 00:03:54.391 SYMLINK libspdk_vmd.so 00:03:54.649 CC lib/jsonrpc/jsonrpc_server.o 00:03:54.649 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:54.649 CC lib/jsonrpc/jsonrpc_client.o 00:03:54.649 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:54.907 LIB libspdk_jsonrpc.a 00:03:54.907 SO libspdk_jsonrpc.so.5.1 00:03:54.907 SYMLINK libspdk_jsonrpc.so 00:03:55.166 LIB libspdk_env_dpdk.a 00:03:55.166 CC lib/rpc/rpc.o 00:03:55.166 SO libspdk_env_dpdk.so.13.0 00:03:55.424 SYMLINK libspdk_env_dpdk.so 00:03:55.424 LIB libspdk_rpc.a 00:03:55.424 SO libspdk_rpc.so.5.0 00:03:55.424 SYMLINK libspdk_rpc.so 00:03:55.682 CC lib/sock/sock.o 00:03:55.682 CC lib/trace/trace.o 00:03:55.682 CC lib/sock/sock_rpc.o 00:03:55.682 CC lib/trace/trace_rpc.o 00:03:55.682 CC lib/trace/trace_flags.o 00:03:55.682 CC lib/notify/notify.o 00:03:55.682 CC lib/notify/notify_rpc.o 00:03:55.682 LIB libspdk_notify.a 00:03:55.940 SO libspdk_notify.so.5.0 00:03:55.940 LIB libspdk_trace.a 00:03:55.940 SO libspdk_trace.so.9.0 00:03:55.940 SYMLINK libspdk_notify.so 00:03:55.941 SYMLINK libspdk_trace.so 00:03:55.941 LIB libspdk_sock.a 00:03:56.199 SO libspdk_sock.so.8.0 00:03:56.199 SYMLINK libspdk_sock.so 00:03:56.199 CC lib/thread/thread.o 00:03:56.199 CC lib/thread/iobuf.o 00:03:56.199 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:56.199 CC lib/nvme/nvme_ctrlr.o 00:03:56.199 CC lib/nvme/nvme_fabric.o 00:03:56.199 CC lib/nvme/nvme_ns_cmd.o 00:03:56.199 CC lib/nvme/nvme_ns.o 00:03:56.199 CC lib/nvme/nvme_pcie_common.o 00:03:56.199 CC lib/nvme/nvme_qpair.o 00:03:56.199 CC lib/nvme/nvme_pcie.o 00:03:56.457 CC lib/nvme/nvme.o 00:03:57.024 CC lib/nvme/nvme_quirks.o 00:03:57.024 CC lib/nvme/nvme_transport.o 00:03:57.282 CC lib/nvme/nvme_discovery.o 00:03:57.282 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:57.282 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:57.282 CC lib/nvme/nvme_tcp.o 00:03:57.540 CC lib/nvme/nvme_opal.o 00:03:57.540 CC lib/nvme/nvme_io_msg.o 00:03:57.797 CC lib/nvme/nvme_poll_group.o 00:03:57.797 LIB libspdk_thread.a 00:03:57.797 SO libspdk_thread.so.9.0 00:03:57.797 SYMLINK libspdk_thread.so 00:03:57.797 CC lib/nvme/nvme_zns.o 00:03:57.797 CC lib/nvme/nvme_cuse.o 
00:03:58.055 CC lib/accel/accel.o 00:03:58.055 CC lib/blob/blobstore.o 00:03:58.055 CC lib/accel/accel_rpc.o 00:03:58.055 CC lib/init/json_config.o 00:03:58.055 CC lib/init/subsystem.o 00:03:58.055 CC lib/init/subsystem_rpc.o 00:03:58.312 CC lib/accel/accel_sw.o 00:03:58.312 CC lib/init/rpc.o 00:03:58.312 CC lib/blob/request.o 00:03:58.592 CC lib/virtio/virtio.o 00:03:58.592 LIB libspdk_init.a 00:03:58.592 CC lib/nvme/nvme_vfio_user.o 00:03:58.592 CC lib/nvme/nvme_rdma.o 00:03:58.592 SO libspdk_init.so.4.0 00:03:58.592 CC lib/virtio/virtio_vhost_user.o 00:03:58.592 SYMLINK libspdk_init.so 00:03:58.592 CC lib/blob/zeroes.o 00:03:58.592 CC lib/event/app.o 00:03:58.849 CC lib/event/reactor.o 00:03:58.849 CC lib/blob/blob_bs_dev.o 00:03:58.849 CC lib/virtio/virtio_vfio_user.o 00:03:58.849 CC lib/virtio/virtio_pci.o 00:03:58.849 CC lib/event/log_rpc.o 00:03:59.106 LIB libspdk_accel.a 00:03:59.106 CC lib/event/app_rpc.o 00:03:59.106 CC lib/event/scheduler_static.o 00:03:59.106 SO libspdk_accel.so.14.0 00:03:59.106 SYMLINK libspdk_accel.so 00:03:59.106 LIB libspdk_virtio.a 00:03:59.106 SO libspdk_virtio.so.6.0 00:03:59.106 CC lib/bdev/bdev.o 00:03:59.106 CC lib/bdev/bdev_rpc.o 00:03:59.106 CC lib/bdev/bdev_zone.o 00:03:59.106 CC lib/bdev/part.o 00:03:59.106 CC lib/bdev/scsi_nvme.o 00:03:59.363 SYMLINK libspdk_virtio.so 00:03:59.363 LIB libspdk_event.a 00:03:59.363 SO libspdk_event.so.12.0 00:03:59.363 SYMLINK libspdk_event.so 00:03:59.929 LIB libspdk_nvme.a 00:04:00.187 SO libspdk_nvme.so.12.0 00:04:00.445 SYMLINK libspdk_nvme.so 00:04:00.705 LIB libspdk_blob.a 00:04:00.705 SO libspdk_blob.so.10.1 00:04:00.964 SYMLINK libspdk_blob.so 00:04:00.964 CC lib/blobfs/blobfs.o 00:04:00.964 CC lib/blobfs/tree.o 00:04:00.964 CC lib/lvol/lvol.o 00:04:01.895 LIB libspdk_bdev.a 00:04:01.895 SO libspdk_bdev.so.14.0 00:04:01.895 LIB libspdk_blobfs.a 00:04:01.895 LIB libspdk_lvol.a 00:04:01.895 SYMLINK libspdk_bdev.so 00:04:01.895 SO libspdk_blobfs.so.9.0 00:04:02.154 SO libspdk_lvol.so.9.1 00:04:02.154 SYMLINK libspdk_blobfs.so 00:04:02.154 SYMLINK libspdk_lvol.so 00:04:02.154 CC lib/scsi/dev.o 00:04:02.154 CC lib/scsi/lun.o 00:04:02.154 CC lib/scsi/port.o 00:04:02.154 CC lib/scsi/scsi.o 00:04:02.154 CC lib/ublk/ublk.o 00:04:02.154 CC lib/scsi/scsi_bdev.o 00:04:02.154 CC lib/ublk/ublk_rpc.o 00:04:02.154 CC lib/nvmf/ctrlr.o 00:04:02.154 CC lib/ftl/ftl_core.o 00:04:02.154 CC lib/nbd/nbd.o 00:04:02.412 CC lib/scsi/scsi_pr.o 00:04:02.412 CC lib/scsi/scsi_rpc.o 00:04:02.412 CC lib/nbd/nbd_rpc.o 00:04:02.412 CC lib/scsi/task.o 00:04:02.412 CC lib/ftl/ftl_init.o 00:04:02.412 CC lib/ftl/ftl_layout.o 00:04:02.412 CC lib/ftl/ftl_debug.o 00:04:02.412 CC lib/ftl/ftl_io.o 00:04:02.670 LIB libspdk_nbd.a 00:04:02.670 SO libspdk_nbd.so.6.0 00:04:02.670 CC lib/ftl/ftl_sb.o 00:04:02.670 CC lib/ftl/ftl_l2p.o 00:04:02.670 LIB libspdk_scsi.a 00:04:02.670 SYMLINK libspdk_nbd.so 00:04:02.670 CC lib/ftl/ftl_l2p_flat.o 00:04:02.670 CC lib/nvmf/ctrlr_discovery.o 00:04:02.670 SO libspdk_scsi.so.8.0 00:04:02.670 CC lib/ftl/ftl_nv_cache.o 00:04:02.670 CC lib/nvmf/ctrlr_bdev.o 00:04:02.670 SYMLINK libspdk_scsi.so 00:04:02.670 LIB libspdk_ublk.a 00:04:02.670 CC lib/nvmf/subsystem.o 00:04:02.929 CC lib/nvmf/nvmf.o 00:04:02.929 CC lib/nvmf/nvmf_rpc.o 00:04:02.929 SO libspdk_ublk.so.2.0 00:04:02.929 CC lib/nvmf/transport.o 00:04:02.929 CC lib/nvmf/tcp.o 00:04:02.929 SYMLINK libspdk_ublk.so 00:04:02.929 CC lib/iscsi/conn.o 00:04:03.187 CC lib/nvmf/rdma.o 00:04:03.446 CC lib/iscsi/init_grp.o 00:04:03.446 CC lib/iscsi/iscsi.o 00:04:03.704 CC 
lib/iscsi/md5.o 00:04:03.704 CC lib/iscsi/param.o 00:04:03.704 CC lib/ftl/ftl_band.o 00:04:03.704 CC lib/iscsi/portal_grp.o 00:04:03.704 CC lib/vhost/vhost.o 00:04:03.704 CC lib/ftl/ftl_band_ops.o 00:04:03.704 CC lib/vhost/vhost_rpc.o 00:04:03.963 CC lib/iscsi/tgt_node.o 00:04:03.963 CC lib/iscsi/iscsi_subsystem.o 00:04:03.963 CC lib/iscsi/iscsi_rpc.o 00:04:04.235 CC lib/ftl/ftl_writer.o 00:04:04.235 CC lib/iscsi/task.o 00:04:04.502 CC lib/vhost/vhost_scsi.o 00:04:04.502 CC lib/ftl/ftl_rq.o 00:04:04.502 CC lib/vhost/vhost_blk.o 00:04:04.502 CC lib/vhost/rte_vhost_user.o 00:04:04.502 CC lib/ftl/ftl_reloc.o 00:04:04.502 CC lib/ftl/ftl_l2p_cache.o 00:04:04.502 CC lib/ftl/ftl_p2l.o 00:04:04.502 CC lib/ftl/mngt/ftl_mngt.o 00:04:04.502 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:04.760 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:04.760 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:04.760 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:04.760 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:05.018 LIB libspdk_iscsi.a 00:04:05.018 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:05.018 SO libspdk_iscsi.so.7.0 00:04:05.018 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:05.018 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:05.276 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:05.276 SYMLINK libspdk_iscsi.so 00:04:05.276 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:05.276 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:05.276 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:05.276 CC lib/ftl/utils/ftl_conf.o 00:04:05.276 CC lib/ftl/utils/ftl_md.o 00:04:05.276 CC lib/ftl/utils/ftl_mempool.o 00:04:05.534 LIB libspdk_nvmf.a 00:04:05.534 CC lib/ftl/utils/ftl_bitmap.o 00:04:05.534 CC lib/ftl/utils/ftl_property.o 00:04:05.534 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:05.534 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:05.534 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:05.534 SO libspdk_nvmf.so.17.0 00:04:05.534 LIB libspdk_vhost.a 00:04:05.534 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:05.534 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:05.534 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:05.534 SO libspdk_vhost.so.7.1 00:04:05.792 SYMLINK libspdk_nvmf.so 00:04:05.792 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:05.792 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:05.792 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:05.792 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:05.792 CC lib/ftl/base/ftl_base_dev.o 00:04:05.792 SYMLINK libspdk_vhost.so 00:04:05.792 CC lib/ftl/base/ftl_base_bdev.o 00:04:05.792 CC lib/ftl/ftl_trace.o 00:04:06.050 LIB libspdk_ftl.a 00:04:06.308 SO libspdk_ftl.so.8.0 00:04:06.565 SYMLINK libspdk_ftl.so 00:04:06.824 CC module/env_dpdk/env_dpdk_rpc.o 00:04:06.824 CC module/accel/ioat/accel_ioat.o 00:04:06.824 CC module/accel/iaa/accel_iaa.o 00:04:06.824 CC module/blob/bdev/blob_bdev.o 00:04:06.824 CC module/accel/dsa/accel_dsa.o 00:04:06.824 CC module/scheduler/gscheduler/gscheduler.o 00:04:06.824 CC module/accel/error/accel_error.o 00:04:06.824 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:06.824 CC module/sock/posix/posix.o 00:04:06.824 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:07.081 LIB libspdk_env_dpdk_rpc.a 00:04:07.081 SO libspdk_env_dpdk_rpc.so.5.0 00:04:07.081 LIB libspdk_scheduler_gscheduler.a 00:04:07.081 LIB libspdk_scheduler_dpdk_governor.a 00:04:07.081 SO libspdk_scheduler_gscheduler.so.3.0 00:04:07.081 CC module/accel/ioat/accel_ioat_rpc.o 00:04:07.081 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:07.081 SYMLINK libspdk_env_dpdk_rpc.so 00:04:07.082 CC module/accel/error/accel_error_rpc.o 00:04:07.082 CC module/accel/iaa/accel_iaa_rpc.o 00:04:07.082 LIB 
libspdk_scheduler_dynamic.a 00:04:07.082 CC module/accel/dsa/accel_dsa_rpc.o 00:04:07.082 SYMLINK libspdk_scheduler_gscheduler.so 00:04:07.082 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:07.082 SO libspdk_scheduler_dynamic.so.3.0 00:04:07.082 LIB libspdk_blob_bdev.a 00:04:07.082 SO libspdk_blob_bdev.so.10.1 00:04:07.082 CC module/sock/uring/uring.o 00:04:07.340 SYMLINK libspdk_scheduler_dynamic.so 00:04:07.340 LIB libspdk_accel_ioat.a 00:04:07.340 LIB libspdk_accel_iaa.a 00:04:07.340 LIB libspdk_accel_error.a 00:04:07.340 SYMLINK libspdk_blob_bdev.so 00:04:07.340 SO libspdk_accel_ioat.so.5.0 00:04:07.340 SO libspdk_accel_iaa.so.2.0 00:04:07.340 SO libspdk_accel_error.so.1.0 00:04:07.340 LIB libspdk_accel_dsa.a 00:04:07.340 SYMLINK libspdk_accel_ioat.so 00:04:07.340 SYMLINK libspdk_accel_error.so 00:04:07.340 SO libspdk_accel_dsa.so.4.0 00:04:07.340 SYMLINK libspdk_accel_iaa.so 00:04:07.340 SYMLINK libspdk_accel_dsa.so 00:04:07.340 CC module/bdev/gpt/gpt.o 00:04:07.340 CC module/bdev/lvol/vbdev_lvol.o 00:04:07.340 CC module/bdev/malloc/bdev_malloc.o 00:04:07.340 CC module/blobfs/bdev/blobfs_bdev.o 00:04:07.340 CC module/bdev/error/vbdev_error.o 00:04:07.340 CC module/bdev/delay/vbdev_delay.o 00:04:07.598 CC module/bdev/null/bdev_null.o 00:04:07.598 CC module/bdev/nvme/bdev_nvme.o 00:04:07.598 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:07.598 CC module/bdev/gpt/vbdev_gpt.o 00:04:07.598 LIB libspdk_sock_posix.a 00:04:07.598 SO libspdk_sock_posix.so.5.0 00:04:07.857 CC module/bdev/null/bdev_null_rpc.o 00:04:07.857 SYMLINK libspdk_sock_posix.so 00:04:07.857 CC module/bdev/error/vbdev_error_rpc.o 00:04:07.857 LIB libspdk_blobfs_bdev.a 00:04:07.857 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:07.857 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:07.857 SO libspdk_blobfs_bdev.so.5.0 00:04:07.857 LIB libspdk_bdev_null.a 00:04:07.857 LIB libspdk_sock_uring.a 00:04:07.857 LIB libspdk_bdev_gpt.a 00:04:07.857 LIB libspdk_bdev_error.a 00:04:08.115 CC module/bdev/passthru/vbdev_passthru.o 00:04:08.115 SYMLINK libspdk_blobfs_bdev.so 00:04:08.115 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:08.115 SO libspdk_bdev_null.so.5.0 00:04:08.115 SO libspdk_sock_uring.so.4.0 00:04:08.115 SO libspdk_bdev_gpt.so.5.0 00:04:08.115 SO libspdk_bdev_error.so.5.0 00:04:08.115 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:08.115 LIB libspdk_bdev_malloc.a 00:04:08.115 LIB libspdk_bdev_delay.a 00:04:08.115 SYMLINK libspdk_bdev_null.so 00:04:08.115 SYMLINK libspdk_bdev_gpt.so 00:04:08.115 SO libspdk_bdev_malloc.so.5.0 00:04:08.115 SYMLINK libspdk_bdev_error.so 00:04:08.115 SYMLINK libspdk_sock_uring.so 00:04:08.115 SO libspdk_bdev_delay.so.5.0 00:04:08.115 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:08.115 SYMLINK libspdk_bdev_malloc.so 00:04:08.115 SYMLINK libspdk_bdev_delay.so 00:04:08.115 CC module/bdev/raid/bdev_raid.o 00:04:08.115 CC module/bdev/split/vbdev_split.o 00:04:08.373 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:08.373 CC module/bdev/uring/bdev_uring.o 00:04:08.373 CC module/bdev/aio/bdev_aio.o 00:04:08.373 CC module/bdev/aio/bdev_aio_rpc.o 00:04:08.373 LIB libspdk_bdev_passthru.a 00:04:08.373 SO libspdk_bdev_passthru.so.5.0 00:04:08.373 LIB libspdk_bdev_lvol.a 00:04:08.373 SO libspdk_bdev_lvol.so.5.0 00:04:08.373 SYMLINK libspdk_bdev_passthru.so 00:04:08.373 CC module/bdev/raid/bdev_raid_rpc.o 00:04:08.373 CC module/bdev/split/vbdev_split_rpc.o 00:04:08.373 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:08.631 SYMLINK libspdk_bdev_lvol.so 00:04:08.631 CC module/bdev/ftl/bdev_ftl.o 
00:04:08.631 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:08.631 CC module/bdev/uring/bdev_uring_rpc.o 00:04:08.631 CC module/bdev/raid/bdev_raid_sb.o 00:04:08.631 LIB libspdk_bdev_aio.a 00:04:08.631 LIB libspdk_bdev_split.a 00:04:08.631 LIB libspdk_bdev_zone_block.a 00:04:08.631 CC module/bdev/raid/raid0.o 00:04:08.631 SO libspdk_bdev_aio.so.5.0 00:04:08.631 SO libspdk_bdev_split.so.5.0 00:04:08.631 SO libspdk_bdev_zone_block.so.5.0 00:04:08.890 SYMLINK libspdk_bdev_aio.so 00:04:08.890 CC module/bdev/raid/raid1.o 00:04:08.890 SYMLINK libspdk_bdev_split.so 00:04:08.890 SYMLINK libspdk_bdev_zone_block.so 00:04:08.890 LIB libspdk_bdev_uring.a 00:04:08.890 SO libspdk_bdev_uring.so.5.0 00:04:08.890 CC module/bdev/raid/concat.o 00:04:08.890 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:08.890 CC module/bdev/iscsi/bdev_iscsi.o 00:04:08.890 LIB libspdk_bdev_ftl.a 00:04:08.890 SYMLINK libspdk_bdev_uring.so 00:04:08.890 CC module/bdev/nvme/nvme_rpc.o 00:04:08.890 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:08.890 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:08.890 SO libspdk_bdev_ftl.so.5.0 00:04:09.148 SYMLINK libspdk_bdev_ftl.so 00:04:09.148 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:09.148 CC module/bdev/nvme/bdev_mdns_client.o 00:04:09.148 CC module/bdev/nvme/vbdev_opal.o 00:04:09.148 LIB libspdk_bdev_raid.a 00:04:09.148 SO libspdk_bdev_raid.so.5.0 00:04:09.148 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:09.148 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:09.406 SYMLINK libspdk_bdev_raid.so 00:04:09.406 LIB libspdk_bdev_iscsi.a 00:04:09.406 SO libspdk_bdev_iscsi.so.5.0 00:04:09.406 LIB libspdk_bdev_virtio.a 00:04:09.406 SYMLINK libspdk_bdev_iscsi.so 00:04:09.406 SO libspdk_bdev_virtio.so.5.0 00:04:09.665 SYMLINK libspdk_bdev_virtio.so 00:04:09.923 LIB libspdk_bdev_nvme.a 00:04:09.923 SO libspdk_bdev_nvme.so.6.0 00:04:10.182 SYMLINK libspdk_bdev_nvme.so 00:04:10.441 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:10.441 CC module/event/subsystems/scheduler/scheduler.o 00:04:10.441 CC module/event/subsystems/sock/sock.o 00:04:10.441 CC module/event/subsystems/iobuf/iobuf.o 00:04:10.441 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:10.441 CC module/event/subsystems/vmd/vmd.o 00:04:10.441 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:10.441 LIB libspdk_event_vhost_blk.a 00:04:10.700 LIB libspdk_event_scheduler.a 00:04:10.700 SO libspdk_event_vhost_blk.so.2.0 00:04:10.700 LIB libspdk_event_sock.a 00:04:10.700 LIB libspdk_event_vmd.a 00:04:10.700 SO libspdk_event_scheduler.so.3.0 00:04:10.700 LIB libspdk_event_iobuf.a 00:04:10.700 SO libspdk_event_sock.so.4.0 00:04:10.700 SYMLINK libspdk_event_vhost_blk.so 00:04:10.700 SO libspdk_event_vmd.so.5.0 00:04:10.700 SO libspdk_event_iobuf.so.2.0 00:04:10.700 SYMLINK libspdk_event_sock.so 00:04:10.700 SYMLINK libspdk_event_scheduler.so 00:04:10.700 SYMLINK libspdk_event_vmd.so 00:04:10.700 SYMLINK libspdk_event_iobuf.so 00:04:10.960 CC module/event/subsystems/accel/accel.o 00:04:10.960 LIB libspdk_event_accel.a 00:04:10.960 SO libspdk_event_accel.so.5.0 00:04:11.219 SYMLINK libspdk_event_accel.so 00:04:11.219 CC module/event/subsystems/bdev/bdev.o 00:04:11.478 LIB libspdk_event_bdev.a 00:04:11.478 SO libspdk_event_bdev.so.5.0 00:04:11.478 SYMLINK libspdk_event_bdev.so 00:04:11.735 CC module/event/subsystems/nbd/nbd.o 00:04:11.735 CC module/event/subsystems/ublk/ublk.o 00:04:11.735 CC module/event/subsystems/scsi/scsi.o 00:04:11.735 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:11.735 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:11.735 
LIB libspdk_event_ublk.a 00:04:11.994 LIB libspdk_event_nbd.a 00:04:11.994 SO libspdk_event_ublk.so.2.0 00:04:11.994 SO libspdk_event_nbd.so.5.0 00:04:11.994 LIB libspdk_event_scsi.a 00:04:11.994 SO libspdk_event_scsi.so.5.0 00:04:11.994 SYMLINK libspdk_event_ublk.so 00:04:11.994 SYMLINK libspdk_event_nbd.so 00:04:11.994 SYMLINK libspdk_event_scsi.so 00:04:11.994 LIB libspdk_event_nvmf.a 00:04:11.994 SO libspdk_event_nvmf.so.5.0 00:04:12.253 SYMLINK libspdk_event_nvmf.so 00:04:12.253 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:12.253 CC module/event/subsystems/iscsi/iscsi.o 00:04:12.253 LIB libspdk_event_iscsi.a 00:04:12.253 LIB libspdk_event_vhost_scsi.a 00:04:12.253 SO libspdk_event_vhost_scsi.so.2.0 00:04:12.253 SO libspdk_event_iscsi.so.5.0 00:04:12.535 SYMLINK libspdk_event_iscsi.so 00:04:12.535 SYMLINK libspdk_event_vhost_scsi.so 00:04:12.535 SO libspdk.so.5.0 00:04:12.535 SYMLINK libspdk.so 00:04:12.794 CXX app/trace/trace.o 00:04:12.794 TEST_HEADER include/spdk/accel.h 00:04:12.794 TEST_HEADER include/spdk/accel_module.h 00:04:12.794 CC app/trace_record/trace_record.o 00:04:12.794 TEST_HEADER include/spdk/assert.h 00:04:12.794 TEST_HEADER include/spdk/barrier.h 00:04:12.794 TEST_HEADER include/spdk/base64.h 00:04:12.794 TEST_HEADER include/spdk/bdev.h 00:04:12.794 TEST_HEADER include/spdk/bdev_module.h 00:04:12.794 TEST_HEADER include/spdk/bdev_zone.h 00:04:12.794 TEST_HEADER include/spdk/bit_array.h 00:04:12.794 TEST_HEADER include/spdk/bit_pool.h 00:04:12.794 TEST_HEADER include/spdk/blob_bdev.h 00:04:12.794 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:12.794 TEST_HEADER include/spdk/blobfs.h 00:04:12.794 TEST_HEADER include/spdk/blob.h 00:04:12.794 TEST_HEADER include/spdk/conf.h 00:04:12.794 TEST_HEADER include/spdk/config.h 00:04:12.794 TEST_HEADER include/spdk/cpuset.h 00:04:12.794 TEST_HEADER include/spdk/crc16.h 00:04:12.794 TEST_HEADER include/spdk/crc32.h 00:04:12.794 TEST_HEADER include/spdk/crc64.h 00:04:12.794 TEST_HEADER include/spdk/dif.h 00:04:12.794 TEST_HEADER include/spdk/dma.h 00:04:12.794 TEST_HEADER include/spdk/endian.h 00:04:12.794 TEST_HEADER include/spdk/env_dpdk.h 00:04:12.794 TEST_HEADER include/spdk/env.h 00:04:12.794 TEST_HEADER include/spdk/event.h 00:04:12.794 TEST_HEADER include/spdk/fd_group.h 00:04:12.794 TEST_HEADER include/spdk/fd.h 00:04:12.794 TEST_HEADER include/spdk/file.h 00:04:12.794 TEST_HEADER include/spdk/ftl.h 00:04:12.794 TEST_HEADER include/spdk/gpt_spec.h 00:04:12.794 TEST_HEADER include/spdk/hexlify.h 00:04:12.794 TEST_HEADER include/spdk/histogram_data.h 00:04:12.794 TEST_HEADER include/spdk/idxd.h 00:04:12.794 TEST_HEADER include/spdk/idxd_spec.h 00:04:12.794 TEST_HEADER include/spdk/init.h 00:04:12.794 TEST_HEADER include/spdk/ioat.h 00:04:12.794 TEST_HEADER include/spdk/ioat_spec.h 00:04:12.794 TEST_HEADER include/spdk/iscsi_spec.h 00:04:12.794 TEST_HEADER include/spdk/json.h 00:04:12.794 CC examples/accel/perf/accel_perf.o 00:04:12.794 TEST_HEADER include/spdk/jsonrpc.h 00:04:12.794 TEST_HEADER include/spdk/likely.h 00:04:12.794 TEST_HEADER include/spdk/log.h 00:04:12.794 CC test/blobfs/mkfs/mkfs.o 00:04:12.794 TEST_HEADER include/spdk/lvol.h 00:04:12.794 CC test/bdev/bdevio/bdevio.o 00:04:12.794 TEST_HEADER include/spdk/memory.h 00:04:12.794 CC examples/blob/hello_world/hello_blob.o 00:04:12.794 TEST_HEADER include/spdk/mmio.h 00:04:12.794 TEST_HEADER include/spdk/nbd.h 00:04:12.794 TEST_HEADER include/spdk/notify.h 00:04:12.794 TEST_HEADER include/spdk/nvme.h 00:04:12.794 TEST_HEADER include/spdk/nvme_intel.h 
00:04:12.794 CC examples/bdev/hello_world/hello_bdev.o 00:04:12.794 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:12.794 CC test/accel/dif/dif.o 00:04:12.794 CC test/app/bdev_svc/bdev_svc.o 00:04:12.794 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:12.794 TEST_HEADER include/spdk/nvme_spec.h 00:04:12.794 TEST_HEADER include/spdk/nvme_zns.h 00:04:12.794 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:12.794 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:12.794 TEST_HEADER include/spdk/nvmf.h 00:04:12.794 TEST_HEADER include/spdk/nvmf_spec.h 00:04:12.794 TEST_HEADER include/spdk/nvmf_transport.h 00:04:12.794 TEST_HEADER include/spdk/opal.h 00:04:12.794 TEST_HEADER include/spdk/opal_spec.h 00:04:12.794 TEST_HEADER include/spdk/pci_ids.h 00:04:12.794 TEST_HEADER include/spdk/pipe.h 00:04:12.794 TEST_HEADER include/spdk/queue.h 00:04:12.794 TEST_HEADER include/spdk/reduce.h 00:04:12.794 TEST_HEADER include/spdk/rpc.h 00:04:12.794 TEST_HEADER include/spdk/scheduler.h 00:04:12.794 TEST_HEADER include/spdk/scsi.h 00:04:12.794 TEST_HEADER include/spdk/scsi_spec.h 00:04:12.794 TEST_HEADER include/spdk/sock.h 00:04:12.794 TEST_HEADER include/spdk/stdinc.h 00:04:13.053 TEST_HEADER include/spdk/string.h 00:04:13.053 TEST_HEADER include/spdk/thread.h 00:04:13.053 TEST_HEADER include/spdk/trace.h 00:04:13.053 TEST_HEADER include/spdk/trace_parser.h 00:04:13.053 TEST_HEADER include/spdk/tree.h 00:04:13.053 TEST_HEADER include/spdk/ublk.h 00:04:13.053 TEST_HEADER include/spdk/util.h 00:04:13.053 TEST_HEADER include/spdk/uuid.h 00:04:13.053 TEST_HEADER include/spdk/version.h 00:04:13.053 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:13.053 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:13.053 TEST_HEADER include/spdk/vhost.h 00:04:13.053 TEST_HEADER include/spdk/vmd.h 00:04:13.053 TEST_HEADER include/spdk/xor.h 00:04:13.053 TEST_HEADER include/spdk/zipf.h 00:04:13.053 CXX test/cpp_headers/accel.o 00:04:13.053 LINK spdk_trace_record 00:04:13.053 LINK mkfs 00:04:13.053 LINK bdev_svc 00:04:13.053 LINK hello_blob 00:04:13.053 CXX test/cpp_headers/accel_module.o 00:04:13.053 LINK hello_bdev 00:04:13.053 LINK spdk_trace 00:04:13.312 LINK bdevio 00:04:13.312 LINK accel_perf 00:04:13.312 LINK dif 00:04:13.312 CXX test/cpp_headers/assert.o 00:04:13.312 CC examples/bdev/bdevperf/bdevperf.o 00:04:13.312 CC examples/blob/cli/blobcli.o 00:04:13.571 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:13.571 CC examples/ioat/perf/perf.o 00:04:13.571 CC app/nvmf_tgt/nvmf_main.o 00:04:13.571 CXX test/cpp_headers/barrier.o 00:04:13.571 CXX test/cpp_headers/base64.o 00:04:13.571 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:13.571 CC test/dma/test_dma/test_dma.o 00:04:13.571 CXX test/cpp_headers/bdev.o 00:04:13.571 LINK nvmf_tgt 00:04:13.829 CC test/env/mem_callbacks/mem_callbacks.o 00:04:13.829 LINK ioat_perf 00:04:13.829 CC test/event/event_perf/event_perf.o 00:04:13.829 CXX test/cpp_headers/bdev_module.o 00:04:13.829 LINK nvme_fuzz 00:04:13.829 LINK blobcli 00:04:13.829 LINK mem_callbacks 00:04:13.829 LINK event_perf 00:04:14.088 LINK test_dma 00:04:14.088 CC examples/ioat/verify/verify.o 00:04:14.088 CC app/iscsi_tgt/iscsi_tgt.o 00:04:14.088 CXX test/cpp_headers/bdev_zone.o 00:04:14.088 CC test/env/vtophys/vtophys.o 00:04:14.088 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:14.088 CXX test/cpp_headers/bit_array.o 00:04:14.088 LINK bdevperf 00:04:14.088 CC test/event/reactor/reactor.o 00:04:14.088 CXX test/cpp_headers/bit_pool.o 00:04:14.347 LINK verify 00:04:14.347 LINK iscsi_tgt 00:04:14.347 LINK vtophys 00:04:14.347 
LINK env_dpdk_post_init 00:04:14.347 LINK reactor 00:04:14.347 CXX test/cpp_headers/blob_bdev.o 00:04:14.606 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:14.606 CC test/lvol/esnap/esnap.o 00:04:14.606 CC app/spdk_lspci/spdk_lspci.o 00:04:14.606 CC examples/nvme/hello_world/hello_world.o 00:04:14.606 CC app/spdk_tgt/spdk_tgt.o 00:04:14.606 CC examples/nvme/reconnect/reconnect.o 00:04:14.606 CC test/env/memory/memory_ut.o 00:04:14.606 CC test/event/reactor_perf/reactor_perf.o 00:04:14.606 CXX test/cpp_headers/blobfs_bdev.o 00:04:14.606 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:14.606 LINK spdk_lspci 00:04:14.865 LINK reactor_perf 00:04:14.865 LINK hello_world 00:04:14.865 LINK spdk_tgt 00:04:14.865 CXX test/cpp_headers/blobfs.o 00:04:14.865 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:14.865 LINK reconnect 00:04:15.124 CXX test/cpp_headers/blob.o 00:04:15.124 CC test/event/app_repeat/app_repeat.o 00:04:15.124 CC test/event/scheduler/scheduler.o 00:04:15.124 LINK vhost_fuzz 00:04:15.124 CC app/spdk_nvme_perf/perf.o 00:04:15.124 LINK memory_ut 00:04:15.124 CXX test/cpp_headers/conf.o 00:04:15.124 LINK app_repeat 00:04:15.124 CC app/spdk_nvme_identify/identify.o 00:04:15.383 LINK iscsi_fuzz 00:04:15.383 CXX test/cpp_headers/config.o 00:04:15.383 CXX test/cpp_headers/cpuset.o 00:04:15.383 LINK scheduler 00:04:15.383 CC test/env/pci/pci_ut.o 00:04:15.643 LINK nvme_manage 00:04:15.643 CC examples/sock/hello_world/hello_sock.o 00:04:15.643 CC examples/vmd/lsvmd/lsvmd.o 00:04:15.643 CXX test/cpp_headers/crc16.o 00:04:15.643 CXX test/cpp_headers/crc32.o 00:04:15.643 CC test/app/histogram_perf/histogram_perf.o 00:04:15.643 LINK lsvmd 00:04:15.643 CC examples/nvme/arbitration/arbitration.o 00:04:15.901 CXX test/cpp_headers/crc64.o 00:04:15.901 LINK histogram_perf 00:04:15.901 LINK hello_sock 00:04:15.901 LINK pci_ut 00:04:15.901 CC examples/vmd/led/led.o 00:04:15.901 CC examples/nvmf/nvmf/nvmf.o 00:04:15.901 CXX test/cpp_headers/dif.o 00:04:16.160 CC test/app/jsoncat/jsoncat.o 00:04:16.160 LINK spdk_nvme_perf 00:04:16.160 CC app/spdk_nvme_discover/discovery_aer.o 00:04:16.160 LINK led 00:04:16.160 LINK arbitration 00:04:16.160 CC app/spdk_top/spdk_top.o 00:04:16.160 CXX test/cpp_headers/dma.o 00:04:16.160 LINK jsoncat 00:04:16.160 LINK spdk_nvme_identify 00:04:16.419 LINK nvmf 00:04:16.419 LINK spdk_nvme_discover 00:04:16.419 CC app/vhost/vhost.o 00:04:16.419 CXX test/cpp_headers/endian.o 00:04:16.419 CC app/spdk_dd/spdk_dd.o 00:04:16.419 CC test/app/stub/stub.o 00:04:16.419 CC examples/nvme/hotplug/hotplug.o 00:04:16.695 CXX test/cpp_headers/env_dpdk.o 00:04:16.695 CC app/fio/nvme/fio_plugin.o 00:04:16.695 LINK vhost 00:04:16.695 LINK stub 00:04:16.695 CC examples/util/zipf/zipf.o 00:04:16.695 CXX test/cpp_headers/env.o 00:04:16.695 CC test/nvme/aer/aer.o 00:04:16.695 LINK hotplug 00:04:16.963 CXX test/cpp_headers/event.o 00:04:16.963 LINK spdk_dd 00:04:16.963 LINK zipf 00:04:16.963 CC test/rpc_client/rpc_client_test.o 00:04:16.963 CC test/nvme/reset/reset.o 00:04:16.963 CXX test/cpp_headers/fd_group.o 00:04:16.963 LINK aer 00:04:16.963 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:16.963 CXX test/cpp_headers/fd.o 00:04:16.963 CXX test/cpp_headers/file.o 00:04:17.222 LINK spdk_top 00:04:17.222 LINK rpc_client_test 00:04:17.222 LINK spdk_nvme 00:04:17.222 CXX test/cpp_headers/ftl.o 00:04:17.222 CXX test/cpp_headers/gpt_spec.o 00:04:17.222 CXX test/cpp_headers/hexlify.o 00:04:17.222 LINK cmb_copy 00:04:17.222 LINK reset 00:04:17.222 CC test/nvme/sgl/sgl.o 00:04:17.222 CXX 
test/cpp_headers/histogram_data.o 00:04:17.222 CXX test/cpp_headers/idxd.o 00:04:17.222 CC app/fio/bdev/fio_plugin.o 00:04:17.480 CXX test/cpp_headers/idxd_spec.o 00:04:17.480 CC examples/nvme/abort/abort.o 00:04:17.480 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:17.480 CXX test/cpp_headers/init.o 00:04:17.480 CC test/nvme/e2edp/nvme_dp.o 00:04:17.480 CC test/nvme/overhead/overhead.o 00:04:17.480 CC test/thread/poller_perf/poller_perf.o 00:04:17.480 LINK sgl 00:04:17.739 CXX test/cpp_headers/ioat.o 00:04:17.739 LINK pmr_persistence 00:04:17.739 LINK poller_perf 00:04:17.739 CC test/nvme/err_injection/err_injection.o 00:04:17.739 LINK nvme_dp 00:04:17.739 CC examples/thread/thread/thread_ex.o 00:04:17.739 LINK overhead 00:04:17.739 CXX test/cpp_headers/ioat_spec.o 00:04:17.998 LINK spdk_bdev 00:04:17.998 LINK abort 00:04:17.998 CC test/nvme/startup/startup.o 00:04:17.998 LINK err_injection 00:04:17.998 CC examples/idxd/perf/perf.o 00:04:17.998 CXX test/cpp_headers/iscsi_spec.o 00:04:17.998 CXX test/cpp_headers/json.o 00:04:17.998 CC test/nvme/reserve/reserve.o 00:04:17.998 LINK thread 00:04:17.998 CC test/nvme/simple_copy/simple_copy.o 00:04:17.998 CXX test/cpp_headers/jsonrpc.o 00:04:17.998 LINK startup 00:04:18.257 CXX test/cpp_headers/likely.o 00:04:18.257 CC test/nvme/connect_stress/connect_stress.o 00:04:18.257 CXX test/cpp_headers/log.o 00:04:18.257 LINK reserve 00:04:18.257 CXX test/cpp_headers/lvol.o 00:04:18.257 CXX test/cpp_headers/memory.o 00:04:18.257 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:18.257 LINK simple_copy 00:04:18.257 LINK idxd_perf 00:04:18.516 CC test/nvme/boot_partition/boot_partition.o 00:04:18.516 CXX test/cpp_headers/mmio.o 00:04:18.516 LINK connect_stress 00:04:18.516 CXX test/cpp_headers/nbd.o 00:04:18.516 LINK interrupt_tgt 00:04:18.516 CC test/nvme/compliance/nvme_compliance.o 00:04:18.516 CC test/nvme/fused_ordering/fused_ordering.o 00:04:18.516 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:18.516 CXX test/cpp_headers/notify.o 00:04:18.516 CC test/nvme/fdp/fdp.o 00:04:18.516 CXX test/cpp_headers/nvme.o 00:04:18.516 LINK boot_partition 00:04:18.775 CC test/nvme/cuse/cuse.o 00:04:18.775 CXX test/cpp_headers/nvme_intel.o 00:04:18.775 CXX test/cpp_headers/nvme_ocssd.o 00:04:18.775 LINK doorbell_aers 00:04:18.775 LINK fused_ordering 00:04:18.775 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:18.775 CXX test/cpp_headers/nvme_spec.o 00:04:18.775 LINK nvme_compliance 00:04:18.775 CXX test/cpp_headers/nvme_zns.o 00:04:19.033 LINK fdp 00:04:19.033 CXX test/cpp_headers/nvmf_cmd.o 00:04:19.033 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:19.033 CXX test/cpp_headers/nvmf.o 00:04:19.033 CXX test/cpp_headers/nvmf_spec.o 00:04:19.033 CXX test/cpp_headers/nvmf_transport.o 00:04:19.033 CXX test/cpp_headers/opal.o 00:04:19.033 CXX test/cpp_headers/opal_spec.o 00:04:19.033 CXX test/cpp_headers/pci_ids.o 00:04:19.033 CXX test/cpp_headers/pipe.o 00:04:19.033 CXX test/cpp_headers/queue.o 00:04:19.033 CXX test/cpp_headers/reduce.o 00:04:19.033 CXX test/cpp_headers/rpc.o 00:04:19.033 CXX test/cpp_headers/scheduler.o 00:04:19.291 CXX test/cpp_headers/scsi.o 00:04:19.291 CXX test/cpp_headers/scsi_spec.o 00:04:19.291 CXX test/cpp_headers/sock.o 00:04:19.291 CXX test/cpp_headers/stdinc.o 00:04:19.291 CXX test/cpp_headers/string.o 00:04:19.291 CXX test/cpp_headers/thread.o 00:04:19.291 CXX test/cpp_headers/trace.o 00:04:19.291 CXX test/cpp_headers/trace_parser.o 00:04:19.549 CXX test/cpp_headers/tree.o 00:04:19.549 CXX test/cpp_headers/ublk.o 00:04:19.549 CXX 
test/cpp_headers/util.o 00:04:19.549 CXX test/cpp_headers/uuid.o 00:04:19.549 CXX test/cpp_headers/version.o 00:04:19.549 CXX test/cpp_headers/vfio_user_pci.o 00:04:19.549 CXX test/cpp_headers/vfio_user_spec.o 00:04:19.549 CXX test/cpp_headers/vhost.o 00:04:19.549 LINK esnap 00:04:19.549 CXX test/cpp_headers/vmd.o 00:04:19.549 CXX test/cpp_headers/xor.o 00:04:19.549 CXX test/cpp_headers/zipf.o 00:04:19.807 LINK cuse 00:04:20.065 00:04:20.065 real 0m50.620s 00:04:20.065 user 4m59.646s 00:04:20.065 sys 0m56.677s 00:04:20.065 ************************************ 00:04:20.065 END TEST make 00:04:20.065 ************************************ 00:04:20.065 21:11:43 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:20.065 21:11:43 -- common/autotest_common.sh@10 -- $ set +x 00:04:20.065 21:11:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:20.065 21:11:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:20.065 21:11:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:20.324 21:11:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:20.324 21:11:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:20.324 21:11:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:20.324 21:11:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:20.324 21:11:43 -- scripts/common.sh@335 -- # IFS=.-: 00:04:20.324 21:11:43 -- scripts/common.sh@335 -- # read -ra ver1 00:04:20.324 21:11:43 -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.324 21:11:43 -- scripts/common.sh@336 -- # read -ra ver2 00:04:20.324 21:11:43 -- scripts/common.sh@337 -- # local 'op=<' 00:04:20.324 21:11:43 -- scripts/common.sh@339 -- # ver1_l=2 00:04:20.324 21:11:43 -- scripts/common.sh@340 -- # ver2_l=1 00:04:20.324 21:11:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:20.324 21:11:43 -- scripts/common.sh@343 -- # case "$op" in 00:04:20.324 21:11:43 -- scripts/common.sh@344 -- # : 1 00:04:20.324 21:11:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:20.324 21:11:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.325 21:11:43 -- scripts/common.sh@364 -- # decimal 1 00:04:20.325 21:11:43 -- scripts/common.sh@352 -- # local d=1 00:04:20.325 21:11:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.325 21:11:43 -- scripts/common.sh@354 -- # echo 1 00:04:20.325 21:11:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:20.325 21:11:43 -- scripts/common.sh@365 -- # decimal 2 00:04:20.325 21:11:43 -- scripts/common.sh@352 -- # local d=2 00:04:20.325 21:11:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.325 21:11:43 -- scripts/common.sh@354 -- # echo 2 00:04:20.325 21:11:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:20.325 21:11:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:20.325 21:11:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:20.325 21:11:43 -- scripts/common.sh@367 -- # return 0 00:04:20.325 21:11:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.325 21:11:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:20.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.325 --rc genhtml_branch_coverage=1 00:04:20.325 --rc genhtml_function_coverage=1 00:04:20.325 --rc genhtml_legend=1 00:04:20.325 --rc geninfo_all_blocks=1 00:04:20.325 --rc geninfo_unexecuted_blocks=1 00:04:20.325 00:04:20.325 ' 00:04:20.325 21:11:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:20.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.325 --rc genhtml_branch_coverage=1 00:04:20.325 --rc genhtml_function_coverage=1 00:04:20.325 --rc genhtml_legend=1 00:04:20.325 --rc geninfo_all_blocks=1 00:04:20.325 --rc geninfo_unexecuted_blocks=1 00:04:20.325 00:04:20.325 ' 00:04:20.325 21:11:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:20.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.325 --rc genhtml_branch_coverage=1 00:04:20.325 --rc genhtml_function_coverage=1 00:04:20.325 --rc genhtml_legend=1 00:04:20.325 --rc geninfo_all_blocks=1 00:04:20.325 --rc geninfo_unexecuted_blocks=1 00:04:20.325 00:04:20.325 ' 00:04:20.325 21:11:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:20.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.325 --rc genhtml_branch_coverage=1 00:04:20.325 --rc genhtml_function_coverage=1 00:04:20.325 --rc genhtml_legend=1 00:04:20.325 --rc geninfo_all_blocks=1 00:04:20.325 --rc geninfo_unexecuted_blocks=1 00:04:20.325 00:04:20.325 ' 00:04:20.325 21:11:43 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:20.325 21:11:43 -- nvmf/common.sh@7 -- # uname -s 00:04:20.325 21:11:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:20.325 21:11:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:20.325 21:11:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:20.325 21:11:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:20.325 21:11:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:20.325 21:11:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:20.325 21:11:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:20.325 21:11:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:20.325 21:11:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:20.325 21:11:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:20.325 21:11:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:04:20.325 
21:11:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:04:20.325 21:11:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:20.325 21:11:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:20.325 21:11:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:20.325 21:11:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:20.325 21:11:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:20.325 21:11:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:20.325 21:11:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:20.325 21:11:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.325 21:11:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.325 21:11:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.325 21:11:43 -- paths/export.sh@5 -- # export PATH 00:04:20.325 21:11:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.325 21:11:43 -- nvmf/common.sh@46 -- # : 0 00:04:20.325 21:11:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:20.325 21:11:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:20.325 21:11:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:20.325 21:11:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:20.325 21:11:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:20.325 21:11:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:20.325 21:11:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:20.325 21:11:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:20.325 21:11:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:20.325 21:11:43 -- spdk/autotest.sh@32 -- # uname -s 00:04:20.325 21:11:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:20.325 21:11:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:20.325 21:11:43 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:20.325 21:11:43 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:20.325 21:11:43 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:20.325 21:11:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:20.325 21:11:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:20.325 21:11:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:20.325 21:11:43 -- spdk/autotest.sh@48 -- # 
udevadm_pid=59794 00:04:20.325 21:11:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:20.325 21:11:43 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:20.325 21:11:43 -- spdk/autotest.sh@54 -- # echo 59801 00:04:20.325 21:11:43 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:20.325 21:11:43 -- spdk/autotest.sh@56 -- # echo 59806 00:04:20.325 21:11:43 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:20.325 21:11:43 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:20.325 21:11:43 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:20.325 21:11:43 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:20.325 21:11:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:20.325 21:11:43 -- common/autotest_common.sh@10 -- # set +x 00:04:20.325 21:11:43 -- spdk/autotest.sh@70 -- # create_test_list 00:04:20.325 21:11:43 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:20.325 21:11:43 -- common/autotest_common.sh@10 -- # set +x 00:04:20.325 21:11:43 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:20.325 21:11:43 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:20.325 21:11:43 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:20.325 21:11:43 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:20.325 21:11:43 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:20.325 21:11:43 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:20.325 21:11:43 -- common/autotest_common.sh@1450 -- # uname 00:04:20.325 21:11:43 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:20.325 21:11:43 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:20.325 21:11:43 -- common/autotest_common.sh@1470 -- # uname 00:04:20.325 21:11:43 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:20.325 21:11:43 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:20.325 21:11:43 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:20.325 lcov: LCOV version 1.15 00:04:20.325 21:11:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:28.435 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:28.435 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:28.435 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:28.435 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:28.435 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:28.435 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:50.364 21:12:12 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:50.364 21:12:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:50.364 21:12:12 -- common/autotest_common.sh@10 -- # set +x 00:04:50.364 21:12:12 -- spdk/autotest.sh@89 -- # rm -f 00:04:50.364 21:12:12 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:50.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.364 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:50.364 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:50.364 21:12:13 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:50.364 21:12:13 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:50.364 21:12:13 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:50.364 21:12:13 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:50.364 21:12:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:50.364 21:12:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:50.364 21:12:13 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:50.364 21:12:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:50.364 21:12:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:50.364 21:12:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:50.364 21:12:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n2 00:04:50.364 21:12:13 -- common/autotest_common.sh@1657 -- # local device=nvme0n2 00:04:50.364 21:12:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:50.364 21:12:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:50.364 21:12:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:50.364 21:12:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n3 00:04:50.364 21:12:13 -- common/autotest_common.sh@1657 -- # local device=nvme0n3 00:04:50.364 21:12:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:50.364 21:12:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:50.364 21:12:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:50.364 21:12:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:50.364 21:12:13 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:50.364 21:12:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:50.364 21:12:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:50.364 21:12:13 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:50.364 21:12:13 -- spdk/autotest.sh@108 -- # grep -v p 00:04:50.364 21:12:13 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme1n1 00:04:50.364 21:12:13 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:50.364 21:12:13 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:50.364 21:12:13 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:50.364 21:12:13 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:50.364 21:12:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:50.364 No valid GPT data, bailing 00:04:50.364 21:12:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
00:04:50.364 21:12:13 -- scripts/common.sh@393 -- # pt= 00:04:50.364 21:12:13 -- scripts/common.sh@394 -- # return 1 00:04:50.364 21:12:13 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:50.364 1+0 records in 00:04:50.364 1+0 records out 00:04:50.364 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00487848 s, 215 MB/s 00:04:50.364 21:12:13 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:50.364 21:12:13 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:50.364 21:12:13 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n2 00:04:50.364 21:12:13 -- scripts/common.sh@380 -- # local block=/dev/nvme0n2 pt 00:04:50.364 21:12:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:04:50.364 No valid GPT data, bailing 00:04:50.364 21:12:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:50.364 21:12:13 -- scripts/common.sh@393 -- # pt= 00:04:50.364 21:12:13 -- scripts/common.sh@394 -- # return 1 00:04:50.364 21:12:13 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:04:50.364 1+0 records in 00:04:50.364 1+0 records out 00:04:50.364 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446899 s, 235 MB/s 00:04:50.365 21:12:13 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:50.365 21:12:13 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:50.365 21:12:13 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n3 00:04:50.365 21:12:13 -- scripts/common.sh@380 -- # local block=/dev/nvme0n3 pt 00:04:50.365 21:12:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:04:50.365 No valid GPT data, bailing 00:04:50.365 21:12:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:50.365 21:12:13 -- scripts/common.sh@393 -- # pt= 00:04:50.365 21:12:13 -- scripts/common.sh@394 -- # return 1 00:04:50.365 21:12:13 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:04:50.365 1+0 records in 00:04:50.365 1+0 records out 00:04:50.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00387044 s, 271 MB/s 00:04:50.365 21:12:13 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:50.365 21:12:13 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:50.365 21:12:13 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:50.365 21:12:13 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:50.365 21:12:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:50.365 No valid GPT data, bailing 00:04:50.365 21:12:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:50.365 21:12:13 -- scripts/common.sh@393 -- # pt= 00:04:50.365 21:12:13 -- scripts/common.sh@394 -- # return 1 00:04:50.365 21:12:13 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:50.365 1+0 records in 00:04:50.365 1+0 records out 00:04:50.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421715 s, 249 MB/s 00:04:50.365 21:12:13 -- spdk/autotest.sh@116 -- # sync 00:04:50.365 21:12:13 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:50.365 21:12:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:50.365 21:12:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:51.742 21:12:15 -- spdk/autotest.sh@122 -- # uname -s 00:04:51.742 21:12:15 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
00:04:51.742 21:12:15 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:51.742 21:12:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.742 21:12:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.742 21:12:15 -- common/autotest_common.sh@10 -- # set +x 00:04:51.742 ************************************ 00:04:51.742 START TEST setup.sh 00:04:51.742 ************************************ 00:04:51.742 21:12:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:51.742 * Looking for test storage... 00:04:51.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:51.742 21:12:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:51.742 21:12:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:51.742 21:12:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:52.002 21:12:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:52.002 21:12:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:52.002 21:12:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:52.002 21:12:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:52.002 21:12:15 -- scripts/common.sh@335 -- # IFS=.-: 00:04:52.002 21:12:15 -- scripts/common.sh@335 -- # read -ra ver1 00:04:52.002 21:12:15 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.002 21:12:15 -- scripts/common.sh@336 -- # read -ra ver2 00:04:52.002 21:12:15 -- scripts/common.sh@337 -- # local 'op=<' 00:04:52.002 21:12:15 -- scripts/common.sh@339 -- # ver1_l=2 00:04:52.002 21:12:15 -- scripts/common.sh@340 -- # ver2_l=1 00:04:52.002 21:12:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:52.002 21:12:15 -- scripts/common.sh@343 -- # case "$op" in 00:04:52.002 21:12:15 -- scripts/common.sh@344 -- # : 1 00:04:52.002 21:12:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:52.002 21:12:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.002 21:12:15 -- scripts/common.sh@364 -- # decimal 1 00:04:52.002 21:12:15 -- scripts/common.sh@352 -- # local d=1 00:04:52.002 21:12:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.002 21:12:15 -- scripts/common.sh@354 -- # echo 1 00:04:52.002 21:12:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:52.002 21:12:15 -- scripts/common.sh@365 -- # decimal 2 00:04:52.002 21:12:15 -- scripts/common.sh@352 -- # local d=2 00:04:52.002 21:12:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.002 21:12:15 -- scripts/common.sh@354 -- # echo 2 00:04:52.002 21:12:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:52.002 21:12:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:52.002 21:12:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:52.002 21:12:15 -- scripts/common.sh@367 -- # return 0 00:04:52.002 21:12:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.002 21:12:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:52.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.002 --rc genhtml_branch_coverage=1 00:04:52.002 --rc genhtml_function_coverage=1 00:04:52.002 --rc genhtml_legend=1 00:04:52.002 --rc geninfo_all_blocks=1 00:04:52.002 --rc geninfo_unexecuted_blocks=1 00:04:52.002 00:04:52.002 ' 00:04:52.002 21:12:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:52.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.002 --rc genhtml_branch_coverage=1 00:04:52.002 --rc genhtml_function_coverage=1 00:04:52.002 --rc genhtml_legend=1 00:04:52.002 --rc geninfo_all_blocks=1 00:04:52.002 --rc geninfo_unexecuted_blocks=1 00:04:52.002 00:04:52.002 ' 00:04:52.002 21:12:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:52.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.002 --rc genhtml_branch_coverage=1 00:04:52.002 --rc genhtml_function_coverage=1 00:04:52.002 --rc genhtml_legend=1 00:04:52.002 --rc geninfo_all_blocks=1 00:04:52.002 --rc geninfo_unexecuted_blocks=1 00:04:52.002 00:04:52.002 ' 00:04:52.002 21:12:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:52.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.002 --rc genhtml_branch_coverage=1 00:04:52.002 --rc genhtml_function_coverage=1 00:04:52.002 --rc genhtml_legend=1 00:04:52.002 --rc geninfo_all_blocks=1 00:04:52.002 --rc geninfo_unexecuted_blocks=1 00:04:52.002 00:04:52.002 ' 00:04:52.002 21:12:15 -- setup/test-setup.sh@10 -- # uname -s 00:04:52.002 21:12:15 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:52.002 21:12:15 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:52.002 21:12:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.002 21:12:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.002 21:12:15 -- common/autotest_common.sh@10 -- # set +x 00:04:52.002 ************************************ 00:04:52.002 START TEST acl 00:04:52.002 ************************************ 00:04:52.002 21:12:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:52.002 * Looking for test storage... 
00:04:52.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:52.002 21:12:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:52.002 21:12:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:52.002 21:12:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:52.002 21:12:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:52.002 21:12:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:52.002 21:12:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:52.002 21:12:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:52.002 21:12:15 -- scripts/common.sh@335 -- # IFS=.-: 00:04:52.002 21:12:15 -- scripts/common.sh@335 -- # read -ra ver1 00:04:52.002 21:12:15 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.002 21:12:15 -- scripts/common.sh@336 -- # read -ra ver2 00:04:52.002 21:12:15 -- scripts/common.sh@337 -- # local 'op=<' 00:04:52.002 21:12:15 -- scripts/common.sh@339 -- # ver1_l=2 00:04:52.002 21:12:15 -- scripts/common.sh@340 -- # ver2_l=1 00:04:52.002 21:12:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:52.002 21:12:15 -- scripts/common.sh@343 -- # case "$op" in 00:04:52.002 21:12:15 -- scripts/common.sh@344 -- # : 1 00:04:52.002 21:12:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:52.002 21:12:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.002 21:12:15 -- scripts/common.sh@364 -- # decimal 1 00:04:52.002 21:12:15 -- scripts/common.sh@352 -- # local d=1 00:04:52.002 21:12:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.002 21:12:15 -- scripts/common.sh@354 -- # echo 1 00:04:52.002 21:12:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:52.002 21:12:15 -- scripts/common.sh@365 -- # decimal 2 00:04:52.002 21:12:15 -- scripts/common.sh@352 -- # local d=2 00:04:52.002 21:12:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.002 21:12:15 -- scripts/common.sh@354 -- # echo 2 00:04:52.002 21:12:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:52.002 21:12:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:52.002 21:12:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:52.002 21:12:15 -- scripts/common.sh@367 -- # return 0 00:04:52.002 21:12:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.003 21:12:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:52.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.003 --rc genhtml_branch_coverage=1 00:04:52.003 --rc genhtml_function_coverage=1 00:04:52.003 --rc genhtml_legend=1 00:04:52.003 --rc geninfo_all_blocks=1 00:04:52.003 --rc geninfo_unexecuted_blocks=1 00:04:52.003 00:04:52.003 ' 00:04:52.003 21:12:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:52.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.003 --rc genhtml_branch_coverage=1 00:04:52.003 --rc genhtml_function_coverage=1 00:04:52.003 --rc genhtml_legend=1 00:04:52.003 --rc geninfo_all_blocks=1 00:04:52.003 --rc geninfo_unexecuted_blocks=1 00:04:52.003 00:04:52.003 ' 00:04:52.003 21:12:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:52.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.003 --rc genhtml_branch_coverage=1 00:04:52.003 --rc genhtml_function_coverage=1 00:04:52.003 --rc genhtml_legend=1 00:04:52.003 --rc geninfo_all_blocks=1 00:04:52.003 --rc geninfo_unexecuted_blocks=1 00:04:52.003 00:04:52.003 ' 00:04:52.003 21:12:15 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:52.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.003 --rc genhtml_branch_coverage=1 00:04:52.003 --rc genhtml_function_coverage=1 00:04:52.003 --rc genhtml_legend=1 00:04:52.003 --rc geninfo_all_blocks=1 00:04:52.003 --rc geninfo_unexecuted_blocks=1 00:04:52.003 00:04:52.003 ' 00:04:52.003 21:12:15 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:52.003 21:12:15 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:52.003 21:12:15 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:52.003 21:12:15 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:52.003 21:12:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:52.003 21:12:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:52.003 21:12:15 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:52.003 21:12:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:52.003 21:12:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:52.003 21:12:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:52.003 21:12:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n2 00:04:52.003 21:12:15 -- common/autotest_common.sh@1657 -- # local device=nvme0n2 00:04:52.003 21:12:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:52.003 21:12:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:52.003 21:12:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:52.003 21:12:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n3 00:04:52.003 21:12:15 -- common/autotest_common.sh@1657 -- # local device=nvme0n3 00:04:52.003 21:12:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:52.003 21:12:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:52.003 21:12:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:52.003 21:12:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:52.003 21:12:15 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:52.003 21:12:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:52.003 21:12:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:52.003 21:12:15 -- setup/acl.sh@12 -- # devs=() 00:04:52.003 21:12:15 -- setup/acl.sh@12 -- # declare -a devs 00:04:52.003 21:12:15 -- setup/acl.sh@13 -- # drivers=() 00:04:52.003 21:12:15 -- setup/acl.sh@13 -- # declare -A drivers 00:04:52.003 21:12:15 -- setup/acl.sh@51 -- # setup reset 00:04:52.003 21:12:15 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:52.003 21:12:15 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:52.940 21:12:16 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:52.940 21:12:16 -- setup/acl.sh@16 -- # local dev driver 00:04:52.940 21:12:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:52.940 21:12:16 -- setup/acl.sh@15 -- # setup output status 00:04:52.940 21:12:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.940 21:12:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:52.940 Hugepages 00:04:52.940 node hugesize free / total 00:04:52.940 21:12:16 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:52.940 21:12:16 -- setup/acl.sh@19 -- # continue 00:04:52.940 21:12:16 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:04:52.940 00:04:52.940 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:52.940 21:12:16 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:52.940 21:12:16 -- setup/acl.sh@19 -- # continue 00:04:52.940 21:12:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:52.940 21:12:16 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:52.940 21:12:16 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:52.940 21:12:16 -- setup/acl.sh@20 -- # continue 00:04:52.940 21:12:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.199 21:12:16 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:53.199 21:12:16 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:53.199 21:12:16 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:53.199 21:12:16 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:53.199 21:12:16 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:53.199 21:12:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.199 21:12:16 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:53.199 21:12:16 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:53.199 21:12:16 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:53.199 21:12:16 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:53.199 21:12:16 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:53.199 21:12:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.199 21:12:16 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:53.199 21:12:16 -- setup/acl.sh@54 -- # run_test denied denied 00:04:53.199 21:12:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.199 21:12:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.199 21:12:16 -- common/autotest_common.sh@10 -- # set +x 00:04:53.199 ************************************ 00:04:53.199 START TEST denied 00:04:53.199 ************************************ 00:04:53.199 21:12:16 -- common/autotest_common.sh@1114 -- # denied 00:04:53.199 21:12:16 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:53.199 21:12:16 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:53.199 21:12:16 -- setup/acl.sh@38 -- # setup output config 00:04:53.199 21:12:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.199 21:12:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:54.137 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:54.137 21:12:17 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:54.137 21:12:17 -- setup/acl.sh@28 -- # local dev driver 00:04:54.137 21:12:17 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:54.137 21:12:17 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:54.137 21:12:17 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:54.137 21:12:17 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:54.137 21:12:17 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:54.137 21:12:17 -- setup/acl.sh@41 -- # setup reset 00:04:54.137 21:12:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.137 21:12:17 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:54.706 00:04:54.706 real 0m1.400s 00:04:54.706 user 0m0.578s 00:04:54.706 sys 0m0.793s 00:04:54.706 21:12:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.706 21:12:18 -- common/autotest_common.sh@10 -- # set +x 00:04:54.706 ************************************ 00:04:54.706 END TEST denied 00:04:54.706 
************************************ 00:04:54.706 21:12:18 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:54.706 21:12:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.706 21:12:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.706 21:12:18 -- common/autotest_common.sh@10 -- # set +x 00:04:54.706 ************************************ 00:04:54.706 START TEST allowed 00:04:54.706 ************************************ 00:04:54.706 21:12:18 -- common/autotest_common.sh@1114 -- # allowed 00:04:54.706 21:12:18 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:54.706 21:12:18 -- setup/acl.sh@45 -- # setup output config 00:04:54.706 21:12:18 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:54.706 21:12:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.706 21:12:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.644 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:55.644 21:12:19 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:55.644 21:12:19 -- setup/acl.sh@28 -- # local dev driver 00:04:55.644 21:12:19 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:55.644 21:12:19 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:55.644 21:12:19 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:55.644 21:12:19 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:55.644 21:12:19 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:55.644 21:12:19 -- setup/acl.sh@48 -- # setup reset 00:04:55.644 21:12:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:55.644 21:12:19 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:56.212 00:04:56.212 real 0m1.495s 00:04:56.212 user 0m0.677s 00:04:56.212 sys 0m0.821s 00:04:56.212 21:12:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.212 21:12:19 -- common/autotest_common.sh@10 -- # set +x 00:04:56.212 ************************************ 00:04:56.212 END TEST allowed 00:04:56.212 ************************************ 00:04:56.212 00:04:56.212 real 0m4.237s 00:04:56.212 user 0m1.931s 00:04:56.212 sys 0m2.318s 00:04:56.212 21:12:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.212 21:12:19 -- common/autotest_common.sh@10 -- # set +x 00:04:56.212 ************************************ 00:04:56.212 END TEST acl 00:04:56.212 ************************************ 00:04:56.212 21:12:19 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:56.212 21:12:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.212 21:12:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.212 21:12:19 -- common/autotest_common.sh@10 -- # set +x 00:04:56.212 ************************************ 00:04:56.212 START TEST hugepages 00:04:56.212 ************************************ 00:04:56.212 21:12:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:56.212 * Looking for test storage... 
00:04:56.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:56.212 21:12:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:56.212 21:12:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:56.212 21:12:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:56.471 21:12:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:56.471 21:12:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:56.471 21:12:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:56.471 21:12:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:56.472 21:12:19 -- scripts/common.sh@335 -- # IFS=.-: 00:04:56.472 21:12:19 -- scripts/common.sh@335 -- # read -ra ver1 00:04:56.472 21:12:19 -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.472 21:12:19 -- scripts/common.sh@336 -- # read -ra ver2 00:04:56.472 21:12:19 -- scripts/common.sh@337 -- # local 'op=<' 00:04:56.472 21:12:19 -- scripts/common.sh@339 -- # ver1_l=2 00:04:56.472 21:12:19 -- scripts/common.sh@340 -- # ver2_l=1 00:04:56.472 21:12:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:56.472 21:12:19 -- scripts/common.sh@343 -- # case "$op" in 00:04:56.472 21:12:19 -- scripts/common.sh@344 -- # : 1 00:04:56.472 21:12:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:56.472 21:12:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.472 21:12:19 -- scripts/common.sh@364 -- # decimal 1 00:04:56.472 21:12:19 -- scripts/common.sh@352 -- # local d=1 00:04:56.472 21:12:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.472 21:12:19 -- scripts/common.sh@354 -- # echo 1 00:04:56.472 21:12:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:56.472 21:12:19 -- scripts/common.sh@365 -- # decimal 2 00:04:56.472 21:12:20 -- scripts/common.sh@352 -- # local d=2 00:04:56.472 21:12:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.472 21:12:20 -- scripts/common.sh@354 -- # echo 2 00:04:56.472 21:12:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:56.472 21:12:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:56.472 21:12:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:56.472 21:12:20 -- scripts/common.sh@367 -- # return 0 00:04:56.472 21:12:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.472 21:12:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:56.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.472 --rc genhtml_branch_coverage=1 00:04:56.472 --rc genhtml_function_coverage=1 00:04:56.472 --rc genhtml_legend=1 00:04:56.472 --rc geninfo_all_blocks=1 00:04:56.472 --rc geninfo_unexecuted_blocks=1 00:04:56.472 00:04:56.472 ' 00:04:56.472 21:12:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:56.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.472 --rc genhtml_branch_coverage=1 00:04:56.472 --rc genhtml_function_coverage=1 00:04:56.472 --rc genhtml_legend=1 00:04:56.472 --rc geninfo_all_blocks=1 00:04:56.472 --rc geninfo_unexecuted_blocks=1 00:04:56.472 00:04:56.472 ' 00:04:56.472 21:12:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:56.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.472 --rc genhtml_branch_coverage=1 00:04:56.472 --rc genhtml_function_coverage=1 00:04:56.472 --rc genhtml_legend=1 00:04:56.472 --rc geninfo_all_blocks=1 00:04:56.472 --rc geninfo_unexecuted_blocks=1 00:04:56.472 00:04:56.472 ' 00:04:56.472 21:12:20 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:56.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.472 --rc genhtml_branch_coverage=1 00:04:56.472 --rc genhtml_function_coverage=1 00:04:56.472 --rc genhtml_legend=1 00:04:56.472 --rc geninfo_all_blocks=1 00:04:56.472 --rc geninfo_unexecuted_blocks=1 00:04:56.472 00:04:56.472 ' 00:04:56.472 21:12:20 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:56.472 21:12:20 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:56.472 21:12:20 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:56.472 21:12:20 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:56.472 21:12:20 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:56.472 21:12:20 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:56.472 21:12:20 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:56.472 21:12:20 -- setup/common.sh@18 -- # local node= 00:04:56.472 21:12:20 -- setup/common.sh@19 -- # local var val 00:04:56.472 21:12:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.472 21:12:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.472 21:12:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.472 21:12:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.472 21:12:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.472 21:12:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.472 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.472 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.472 21:12:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 4836204 kB' 'MemAvailable: 7337608 kB' 'Buffers: 2684 kB' 'Cached: 2705980 kB' 'SwapCached: 0 kB' 'Active: 456056 kB' 'Inactive: 2370308 kB' 'Active(anon): 128212 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370308 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 119636 kB' 'Mapped: 51136 kB' 'Shmem: 10512 kB' 'KReclaimable: 80396 kB' 'Slab: 180104 kB' 'SReclaimable: 80396 kB' 'SUnreclaim: 99708 kB' 'KernelStack: 6896 kB' 'PageTables: 4836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 321076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:56.472 21:12:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.472 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.472 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.472 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.472 21:12:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.472 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.472 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.472 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.472 21:12:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.472 21:12:20 -- 
setup/common.sh@32 -- # continue 00:04:56.472 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.472 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.472 21:12:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.472 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.473 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.473 21:12:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # continue 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.474 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.474 21:12:20 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.474 21:12:20 -- setup/common.sh@33 -- # echo 2048 00:04:56.474 21:12:20 -- setup/common.sh@33 -- # return 0 00:04:56.474 21:12:20 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:56.474 21:12:20 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:56.474 21:12:20 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:56.474 21:12:20 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:56.474 21:12:20 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:56.474 21:12:20 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:56.474 21:12:20 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:56.474 21:12:20 -- setup/hugepages.sh@207 -- # get_nodes 00:04:56.474 21:12:20 -- setup/hugepages.sh@27 -- # local node 00:04:56.474 21:12:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.474 21:12:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:56.474 21:12:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:56.474 21:12:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.475 21:12:20 -- setup/hugepages.sh@208 -- # clear_hp 00:04:56.475 21:12:20 -- setup/hugepages.sh@37 -- # local node hp 00:04:56.475 21:12:20 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:56.475 21:12:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.475 21:12:20 -- setup/hugepages.sh@41 -- # echo 0 00:04:56.475 21:12:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.475 21:12:20 -- setup/hugepages.sh@41 -- # echo 0 00:04:56.475 21:12:20 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:56.475 21:12:20 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:56.475 21:12:20 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:56.475 21:12:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.475 21:12:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.475 21:12:20 -- common/autotest_common.sh@10 -- # set +x 00:04:56.475 ************************************ 00:04:56.475 START TEST default_setup 00:04:56.475 ************************************ 00:04:56.475 21:12:20 -- common/autotest_common.sh@1114 -- # default_setup 00:04:56.475 21:12:20 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:56.475 21:12:20 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:56.475 21:12:20 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:56.475 21:12:20 -- setup/hugepages.sh@51 -- # shift 00:04:56.475 21:12:20 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:56.475 21:12:20 -- setup/hugepages.sh@52 -- # local node_ids 00:04:56.475 21:12:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.475 21:12:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:56.475 21:12:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:56.475 21:12:20 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:56.475 21:12:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.475 21:12:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.475 21:12:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:56.475 21:12:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.475 21:12:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.475 21:12:20 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:56.475 21:12:20 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:56.475 21:12:20 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:56.475 21:12:20 -- setup/hugepages.sh@73 -- # return 0 00:04:56.475 21:12:20 -- setup/hugepages.sh@137 -- # setup output 00:04:56.475 21:12:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.475 21:12:20 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.042 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.042 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.303 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.303 21:12:20 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:57.303 21:12:20 -- setup/hugepages.sh@89 -- # local node 00:04:57.303 21:12:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:57.303 21:12:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:57.303 21:12:20 -- setup/hugepages.sh@92 -- # local surp 00:04:57.303 21:12:20 -- setup/hugepages.sh@93 -- # local resv 00:04:57.303 21:12:20 -- setup/hugepages.sh@94 -- # local anon 00:04:57.303 21:12:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:57.303 21:12:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:57.303 21:12:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:57.303 21:12:20 -- setup/common.sh@18 -- # local node= 00:04:57.303 21:12:20 -- setup/common.sh@19 -- # local var val 00:04:57.303 21:12:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.303 21:12:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.303 21:12:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.303 21:12:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.303 21:12:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.303 21:12:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.303 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6892796 kB' 'MemAvailable: 9394044 kB' 'Buffers: 2684 kB' 'Cached: 2705968 kB' 'SwapCached: 0 kB' 'Active: 457216 kB' 'Inactive: 2370316 kB' 'Active(anon): 129372 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120684 kB' 'Mapped: 51068 kB' 'Shmem: 10488 kB' 'KReclaimable: 80072 kB' 'Slab: 179916 kB' 'SReclaimable: 80072 kB' 'SUnreclaim: 99844 kB' 'KernelStack: 6784 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- 
setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.304 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.304 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.305 21:12:20 -- setup/common.sh@33 -- # echo 0 00:04:57.305 21:12:20 -- setup/common.sh@33 -- # return 0 00:04:57.305 21:12:20 -- setup/hugepages.sh@97 -- # anon=0 00:04:57.305 21:12:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:57.305 21:12:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.305 21:12:20 -- setup/common.sh@18 -- # local node= 00:04:57.305 21:12:20 -- setup/common.sh@19 -- # local var val 00:04:57.305 21:12:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.305 21:12:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.305 21:12:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.305 21:12:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.305 21:12:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.305 21:12:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6893048 kB' 'MemAvailable: 9394300 kB' 'Buffers: 2684 kB' 'Cached: 2705968 kB' 'SwapCached: 0 kB' 'Active: 457408 kB' 'Inactive: 2370320 kB' 'Active(anon): 129564 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120352 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80072 kB' 'Slab: 179920 kB' 'SReclaimable: 80072 kB' 'SUnreclaim: 99848 kB' 'KernelStack: 6800 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.305 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.305 21:12:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 
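The loop traced here is setup/common.sh's get_meminfo walking every key of /proc/meminfo and skipping (continue) until it reaches the one it was asked for, HugePages_Surp at this point in the run. A minimal sketch of that lookup pattern, under illustrative names (meminfo_lookup is not the real helper; the actual code lives in setup/common.sh):

# Sketch only: look one key up in /proc/meminfo the way the trace above does,
# splitting each line on ': ' and returning its value; a node id switches to
# the per-node sysfs meminfo file.
meminfo_lookup() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # per-node files prefix every line with "Node <id> "; strip that first
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# e.g. meminfo_lookup Hugepagesize     -> 2048  (the value echoed at setup/common.sh@33 earlier)
#      meminfo_lookup HugePages_Surp 0 -> surplus pages on NUMA node 0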
00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- 
setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 
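Each of these lookups feeds the accounting check that verify_nr_hugepages performs a little further down in this trace: AnonHugePages is read only for reporting, while HugePages_Surp, HugePages_Rsvd and HugePages_Total must add up to the 1024 pages requested by default_setup once scripts/setup.sh has reserved the pool. A rough, self-contained sketch of that check, with hypothetical helper names (mi, verify_hugepages_sketch) rather than the real setup/hugepages.sh functions:

# Sketch only: verify the reserved hugepage pool is fully accounted for.
mi() { awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo; }

verify_hugepages_sketch() {
    local expected=$1
    local anon surp resv total
    anon=$(mi AnonHugePages)     # transparent hugepages, reported only
    surp=$(mi HugePages_Surp)    # surplus pages beyond the static pool
    resv=$(mi HugePages_Rsvd)    # reserved but not yet faulted in
    total=$(mi HugePages_Total)
    echo "nr_hugepages=$expected resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # the run above evaluates this as (( 1024 == 1024 + 0 + 0 ))
    (( total == expected + surp + resv ))
}
# e.g. verify_hugepages_sketch 1024, matching the default_setup expectation traced here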
00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.306 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.306 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.307 21:12:20 -- setup/common.sh@33 -- # echo 0 00:04:57.307 21:12:20 -- setup/common.sh@33 -- # return 0 00:04:57.307 21:12:20 -- setup/hugepages.sh@99 -- # surp=0 00:04:57.307 21:12:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:57.307 21:12:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:57.307 21:12:20 -- setup/common.sh@18 -- # local node= 00:04:57.307 21:12:20 -- setup/common.sh@19 -- # local var val 00:04:57.307 21:12:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.307 21:12:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.307 21:12:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.307 21:12:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.307 21:12:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.307 21:12:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.307 
21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6893048 kB' 'MemAvailable: 9394300 kB' 'Buffers: 2684 kB' 'Cached: 2705968 kB' 'SwapCached: 0 kB' 'Active: 457212 kB' 'Inactive: 2370320 kB' 'Active(anon): 129368 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120468 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80072 kB' 'Slab: 179908 kB' 'SReclaimable: 80072 kB' 'SUnreclaim: 99836 kB' 'KernelStack: 6816 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 
21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.307 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.307 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 
21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.308 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.308 21:12:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.308 21:12:20 -- setup/common.sh@33 -- # echo 0 00:04:57.308 21:12:20 -- setup/common.sh@33 -- # return 0 00:04:57.308 nr_hugepages=1024 00:04:57.308 resv_hugepages=0 00:04:57.308 21:12:20 -- setup/hugepages.sh@100 -- # resv=0 00:04:57.308 21:12:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:57.308 21:12:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:57.308 surplus_hugepages=0 00:04:57.308 21:12:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:57.308 anon_hugepages=0 00:04:57.308 21:12:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:57.309 21:12:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.309 21:12:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:57.309 21:12:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:57.309 21:12:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:57.309 21:12:20 -- setup/common.sh@18 -- # local node= 00:04:57.309 21:12:20 -- setup/common.sh@19 -- # local var val 00:04:57.309 21:12:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.309 21:12:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.309 21:12:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.309 21:12:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.309 21:12:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.309 21:12:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.309 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6893048 kB' 'MemAvailable: 9394300 kB' 'Buffers: 2684 kB' 'Cached: 2705968 kB' 'SwapCached: 0 kB' 'Active: 457116 kB' 'Inactive: 2370320 kB' 'Active(anon): 129272 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120356 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80072 kB' 'Slab: 179900 kB' 
'SReclaimable: 80072 kB' 'SUnreclaim: 99828 kB' 'KernelStack: 6800 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:57.309 21:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:20 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 
21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.309 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.309 21:12:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- 
setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.310 21:12:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.310 21:12:21 -- 
setup/common.sh@32 -- # continue 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.310 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.311 21:12:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.311 21:12:21 -- setup/common.sh@33 -- # echo 1024 00:04:57.311 21:12:21 -- setup/common.sh@33 -- # return 0 00:04:57.311 21:12:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.311 21:12:21 -- setup/hugepages.sh@112 -- # get_nodes 00:04:57.311 21:12:21 -- setup/hugepages.sh@27 -- # local node 00:04:57.311 21:12:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.311 21:12:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:57.311 21:12:21 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:57.311 21:12:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:57.311 21:12:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:57.311 21:12:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:57.311 21:12:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:57.311 21:12:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.311 21:12:21 -- setup/common.sh@18 -- # local node=0 00:04:57.311 21:12:21 -- setup/common.sh@19 -- # local var val 00:04:57.311 21:12:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.311 21:12:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.311 21:12:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:57.311 21:12:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:57.311 21:12:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.311 21:12:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.311 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.311 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.311 21:12:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6892796 kB' 'MemUsed: 5346316 kB' 'SwapCached: 0 kB' 'Active: 457148 kB' 'Inactive: 2370320 kB' 'Active(anon): 129304 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 2708652 kB' 'Mapped: 50972 kB' 'AnonPages: 120172 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80072 kB' 'Slab: 179896 kB' 'SReclaimable: 80072 kB' 'SUnreclaim: 99824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:57.311 21:12:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.311 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.311 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.311 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.311 21:12:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.311 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.311 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.311 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.311 21:12:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.311 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 
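The trace above is setup/common.sh's get_meminfo helper scanning every key of a meminfo-style file until it reaches the one requested (HugePages_Total here, answered with 1024), and it is then pointed at /sys/devices/system/node/node0/meminfo for the per-node HugePages_Surp query that follows. A minimal standalone sketch of that parsing loop, with hypothetical names and not the SPDK script itself, could look like:

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node queries read the node's own meminfo, whose lines carry a "Node <id> " prefix
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node $node }")
    local var val _
    while IFS=': ' read -r var val _; do
        # skip every key that is not the one asked for, exactly as the xtrace shows
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# get_meminfo_sketch HugePages_Total     -> 1024 on this box
# get_meminfo_sketch HugePages_Surp 0    -> 0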
21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.570 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.570 21:12:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.571 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.571 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.571 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.571 21:12:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.571 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.571 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.571 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.571 21:12:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.571 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.571 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.571 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.571 21:12:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.571 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.571 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.571 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.571 21:12:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.571 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.571 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.571 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.571 21:12:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.571 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.571 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.571 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.571 21:12:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.571 21:12:21 -- setup/common.sh@33 -- # echo 0 00:04:57.571 21:12:21 -- setup/common.sh@33 -- # return 0 00:04:57.571 node0=1024 expecting 1024 00:04:57.571 ************************************ 00:04:57.571 END TEST default_setup 00:04:57.571 ************************************ 00:04:57.571 21:12:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.571 21:12:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.571 21:12:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.571 21:12:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.571 21:12:21 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:57.571 21:12:21 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:57.571 00:04:57.571 real 0m1.008s 00:04:57.571 user 0m0.466s 00:04:57.571 sys 0m0.462s 00:04:57.571 21:12:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.571 21:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:57.571 21:12:21 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:57.571 21:12:21 
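The "node0=1024 expecting 1024" line and the END TEST default_setup banner above come from verify_nr_hugepages confirming that the kernel's counters add up to the requested pool. A hedged sketch of that arithmetic (variable names are illustrative, not the script's own):

nr_hugepages=1024                                                   # what default_setup asked for
surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
# the global pool must account for surplus and reserved pages...
(( total == nr_hugepages + surp + resv )) || echo "global hugepage count mismatch" >&2
# ...and on this single-node box the node 0 count is expected to match it exactly
echo "node0=$total expecting $nr_hugepages"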
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.571 21:12:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.571 21:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:57.571 ************************************ 00:04:57.571 START TEST per_node_1G_alloc 00:04:57.571 ************************************ 00:04:57.571 21:12:21 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:57.571 21:12:21 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:57.571 21:12:21 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:57.571 21:12:21 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:57.571 21:12:21 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:57.571 21:12:21 -- setup/hugepages.sh@51 -- # shift 00:04:57.571 21:12:21 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:57.571 21:12:21 -- setup/hugepages.sh@52 -- # local node_ids 00:04:57.571 21:12:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:57.571 21:12:21 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:57.571 21:12:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:57.571 21:12:21 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:57.571 21:12:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:57.571 21:12:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:57.571 21:12:21 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:57.571 21:12:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:57.571 21:12:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:57.571 21:12:21 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:57.571 21:12:21 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:57.571 21:12:21 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:57.571 21:12:21 -- setup/hugepages.sh@73 -- # return 0 00:04:57.571 21:12:21 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:57.571 21:12:21 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:57.571 21:12:21 -- setup/hugepages.sh@146 -- # setup output 00:04:57.571 21:12:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.571 21:12:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.833 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.833 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:57.833 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:57.833 21:12:21 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:57.833 21:12:21 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:57.833 21:12:21 -- setup/hugepages.sh@89 -- # local node 00:04:57.833 21:12:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:57.833 21:12:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:57.833 21:12:21 -- setup/hugepages.sh@92 -- # local surp 00:04:57.833 21:12:21 -- setup/hugepages.sh@93 -- # local resv 00:04:57.833 21:12:21 -- setup/hugepages.sh@94 -- # local anon 00:04:57.833 21:12:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:57.833 21:12:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:57.833 21:12:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:57.833 21:12:21 -- setup/common.sh@18 -- # local node= 00:04:57.833 21:12:21 -- setup/common.sh@19 -- # local var val 00:04:57.833 21:12:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.833 21:12:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.833 21:12:21 -- 
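The per_node_1G_alloc prologue above converts the requested 1048576 kB (1 GiB) on node 0 into a page count before calling scripts/setup.sh with NRHUGE=512 and HUGENODE=0. A small sketch of that conversion, assuming the default 2048 kB hugepage size reported in this run:

size_kb=1048576                                                     # 1 GiB requested for node 0
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 in this run
nr_hugepages=$(( size_kb / hugepagesize_kb ))
echo "NRHUGE=$nr_hugepages HUGENODE=0"                              # -> NRHUGE=512 HUGENODE=0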
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.833 21:12:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.833 21:12:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.833 21:12:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7956516 kB' 'MemAvailable: 10457772 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457532 kB' 'Inactive: 2370324 kB' 'Active(anon): 129688 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120832 kB' 'Mapped: 51088 kB' 'Shmem: 10488 kB' 'KReclaimable: 80072 kB' 'Slab: 180016 kB' 'SReclaimable: 80072 kB' 'SUnreclaim: 99944 kB' 'KernelStack: 6776 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 
-- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 
21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.833 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.833 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.834 21:12:21 -- setup/common.sh@33 -- # echo 0 00:04:57.834 21:12:21 -- setup/common.sh@33 -- # return 0 00:04:57.834 21:12:21 -- setup/hugepages.sh@97 -- # anon=0 00:04:57.834 21:12:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:57.834 21:12:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.834 21:12:21 -- setup/common.sh@18 -- # local node= 00:04:57.834 21:12:21 -- setup/common.sh@19 -- # local var val 00:04:57.834 21:12:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.834 21:12:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.834 21:12:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.834 21:12:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.834 21:12:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.834 21:12:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7956516 kB' 'MemAvailable: 10457772 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457396 kB' 'Inactive: 2370324 kB' 'Active(anon): 129552 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 
kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120640 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80072 kB' 'Slab: 180040 kB' 'SReclaimable: 80072 kB' 'SUnreclaim: 99968 kB' 'KernelStack: 6800 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.834 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.834 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # 
continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.835 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.835 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.097 21:12:21 -- setup/common.sh@33 -- # echo 0 00:04:58.097 21:12:21 -- setup/common.sh@33 -- # return 0 00:04:58.097 21:12:21 -- setup/hugepages.sh@99 -- # surp=0 00:04:58.097 21:12:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:58.097 21:12:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:58.097 21:12:21 -- setup/common.sh@18 -- # local node= 00:04:58.097 21:12:21 -- setup/common.sh@19 -- # local var val 00:04:58.097 21:12:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.097 21:12:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.097 21:12:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.097 21:12:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.097 21:12:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.097 21:12:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7956968 kB' 'MemAvailable: 10458224 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457216 kB' 'Inactive: 2370324 kB' 'Active(anon): 129372 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120508 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80072 kB' 'Slab: 180032 kB' 'SReclaimable: 80072 kB' 'SUnreclaim: 99960 kB' 'KernelStack: 6816 kB' 'PageTables: 4616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.097 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:58.097 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.098 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.098 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.099 21:12:21 -- setup/common.sh@33 -- # echo 0 00:04:58.099 21:12:21 -- setup/common.sh@33 -- # return 0 00:04:58.099 21:12:21 -- setup/hugepages.sh@100 -- # resv=0 00:04:58.099 nr_hugepages=512 00:04:58.099 resv_hugepages=0 00:04:58.099 surplus_hugepages=0 00:04:58.099 anon_hugepages=0 00:04:58.099 21:12:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:58.099 21:12:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:58.099 21:12:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:58.099 21:12:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:58.099 21:12:21 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:58.099 21:12:21 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:58.099 21:12:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:58.099 21:12:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:58.099 21:12:21 -- setup/common.sh@18 -- # local node= 00:04:58.099 21:12:21 -- setup/common.sh@19 -- # local var val 00:04:58.099 21:12:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.099 21:12:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.099 21:12:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.099 21:12:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.099 21:12:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.099 21:12:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7956968 kB' 'MemAvailable: 10458224 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457268 kB' 'Inactive: 2370324 kB' 'Active(anon): 129424 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120504 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80072 kB' 'Slab: 180024 kB' 'SReclaimable: 80072 kB' 'SUnreclaim: 99952 kB' 'KernelStack: 6816 kB' 'PageTables: 4616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 
21:12:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 
21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.099 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.099 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.100 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.100 21:12:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.100 21:12:21 -- setup/common.sh@33 -- # echo 512 00:04:58.100 21:12:21 -- setup/common.sh@33 -- # return 0 00:04:58.100 21:12:21 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:58.100 21:12:21 -- setup/hugepages.sh@112 -- # get_nodes 00:04:58.100 21:12:21 -- setup/hugepages.sh@27 -- # local node 00:04:58.100 21:12:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.100 21:12:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:58.100 21:12:21 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:58.100 21:12:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.100 21:12:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.100 21:12:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:58.100 21:12:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:58.100 21:12:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.100 21:12:21 -- setup/common.sh@18 -- # local node=0 00:04:58.101 21:12:21 -- setup/common.sh@19 -- # local 
var val 00:04:58.101 21:12:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.101 21:12:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.101 21:12:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:58.101 21:12:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:58.101 21:12:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.101 21:12:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7956968 kB' 'MemUsed: 4282144 kB' 'SwapCached: 0 kB' 'Active: 457404 kB' 'Inactive: 2370324 kB' 'Active(anon): 129560 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 2708656 kB' 'Mapped: 50972 kB' 'AnonPages: 120636 kB' 'Shmem: 10488 kB' 'KernelStack: 6800 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80072 kB' 'Slab: 180008 kB' 'SReclaimable: 80072 kB' 'SUnreclaim: 99936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- 
setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.101 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.101 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.102 21:12:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # continue 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.102 21:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.102 21:12:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.102 21:12:21 -- setup/common.sh@33 -- # echo 0 00:04:58.102 21:12:21 -- setup/common.sh@33 -- # return 0 00:04:58.102 21:12:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.102 21:12:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.102 21:12:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.102 21:12:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.102 node0=512 expecting 512 00:04:58.102 21:12:21 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:58.102 21:12:21 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:58.102 00:04:58.102 real 0m0.557s 00:04:58.102 user 0m0.270s 00:04:58.102 sys 0m0.298s 00:04:58.102 21:12:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:58.102 21:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:58.102 ************************************ 00:04:58.102 END TEST per_node_1G_alloc 00:04:58.102 ************************************ 00:04:58.102 21:12:21 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:58.102 21:12:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.102 21:12:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.102 21:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:58.102 ************************************ 00:04:58.102 START TEST even_2G_alloc 00:04:58.102 ************************************ 00:04:58.102 21:12:21 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:58.102 21:12:21 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:58.102 21:12:21 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:58.102 21:12:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:58.102 21:12:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.102 21:12:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:58.102 21:12:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:58.102 21:12:21 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:58.102 21:12:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.102 21:12:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:58.102 21:12:21 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:58.102 21:12:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.102 21:12:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.102 21:12:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:58.102 21:12:21 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:58.102 21:12:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.102 21:12:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:58.102 21:12:21 -- setup/hugepages.sh@83 -- # : 0 00:04:58.102 21:12:21 -- setup/hugepages.sh@84 -- # : 0 00:04:58.102 21:12:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.102 21:12:21 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:58.102 21:12:21 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:58.102 21:12:21 -- setup/hugepages.sh@153 -- # setup output 00:04:58.102 21:12:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.102 21:12:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:58.361 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.361 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:58.361 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:58.361 21:12:22 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:58.361 21:12:22 -- setup/hugepages.sh@89 -- # local node 00:04:58.361 21:12:22 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:58.361 21:12:22 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:58.361 21:12:22 -- setup/hugepages.sh@92 -- # local surp 00:04:58.361 21:12:22 -- setup/hugepages.sh@93 -- # local resv 00:04:58.361 21:12:22 -- setup/hugepages.sh@94 -- # local anon 00:04:58.361 21:12:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:58.361 21:12:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:58.361 21:12:22 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:58.361 21:12:22 -- setup/common.sh@18 -- # local node= 00:04:58.361 21:12:22 -- setup/common.sh@19 -- # local var val 00:04:58.361 21:12:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.361 21:12:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.361 21:12:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.622 21:12:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.622 21:12:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.622 21:12:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6904352 kB' 'MemAvailable: 9405608 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457560 kB' 'Inactive: 2370324 kB' 'Active(anon): 129716 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120884 kB' 'Mapped: 51056 kB' 'Shmem: 10488 kB' 'KReclaimable: 80072 kB' 'Slab: 180008 kB' 'SReclaimable: 80072 kB' 'SUnreclaim: 99936 kB' 'KernelStack: 6784 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 
21:12:22 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.622 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.622 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # 
continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.623 21:12:22 -- setup/common.sh@33 -- # echo 0 00:04:58.623 21:12:22 -- setup/common.sh@33 -- # return 0 00:04:58.623 21:12:22 -- setup/hugepages.sh@97 -- # anon=0 00:04:58.623 21:12:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:58.623 21:12:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.623 21:12:22 -- setup/common.sh@18 -- # local node= 00:04:58.623 21:12:22 -- setup/common.sh@19 -- # local var val 00:04:58.623 21:12:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.623 21:12:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.623 21:12:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.623 21:12:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.623 21:12:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.623 21:12:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6904352 kB' 'MemAvailable: 9405604 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457108 kB' 'Inactive: 2370324 kB' 'Active(anon): 129264 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120424 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 179996 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99932 kB' 'KernelStack: 6800 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 
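The long runs of [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue records above are setup/common.sh's get_meminfo helper scanning the /proc/meminfo (or per-node /sys/devices/system/node/nodeN/meminfo) snapshot it captured with mapfile, field by field, until it reaches the requested key and echoes its value (here HugePages_Surp inside verify_nr_hugepages). Below is a minimal, self-contained sketch of that lookup pattern; the function name get_meminfo_sketch is hypothetical, and the streaming read stands in for the mapfile/array handling seen in the trace, so this illustrates the idea rather than reproducing the repository's exact code.

  # Sketch only: a simplified stand-in for the get_meminfo lookup traced above.
  get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live in /sys/devices/system/node/node<N>/meminfo,
    # as the [[ -e .../node0/meminfo ]] checks earlier in the trace show.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val rest
    while IFS=': ' read -r var val rest; do
      # Per-node files prefix every line with "Node <N> "; drop that prefix.
      if [[ $var == Node ]]; then
        IFS=': ' read -r var val rest <<<"$rest"
      fi
      # This comparison corresponds to the [[ <field> == <key> ]] / continue
      # pairs that make up the bulk of the xtrace output.
      [[ $var == "$key" ]] || continue
      echo "$val"
      return 0
    done < "$mem_f"
    return 1
  }

  # Hypothetical invocations, for illustration:
  #   get_meminfo_sketch HugePages_Surp      # system-wide surplus hugepages
  #   get_meminfo_sketch HugePages_Total 0   # per-node value for node 0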
00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.623 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.623 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # 
continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.624 21:12:22 -- setup/common.sh@33 -- # echo 0 00:04:58.624 21:12:22 -- setup/common.sh@33 -- # return 0 00:04:58.624 21:12:22 -- setup/hugepages.sh@99 -- # surp=0 00:04:58.624 21:12:22 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:58.624 21:12:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:58.624 21:12:22 -- setup/common.sh@18 -- # local node= 00:04:58.624 21:12:22 -- setup/common.sh@19 -- # local var val 00:04:58.624 21:12:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.624 21:12:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.624 21:12:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.624 21:12:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.624 21:12:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.624 21:12:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6910456 kB' 'MemAvailable: 9411708 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457172 kB' 'Inactive: 2370324 kB' 'Active(anon): 129328 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120540 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 179996 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99932 kB' 'KernelStack: 6816 kB' 'PageTables: 4608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.624 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.624 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 
00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- 
setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 
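The long runs of '[[ FieldName == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' followed by 'continue' above are bash xtrace output of get_meminfo() in setup/common.sh walking every field of the meminfo dump until it reaches the requested key (HugePages_Surp, then HugePages_Rsvd), echoing its value and returning. A minimal sketch of that pattern, reconstructed from the trace; the variable and file names are taken from the trace itself, and the real helper may differ in detail:

  shopt -s extglob                          # needed for the +([0-9]) pattern below
  get_meminfo() {                           # usage: get_meminfo <field> [node]
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem
    # With a node index, read the per-node meminfo instead of the global one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # strip the "Node N " prefix used by per-node files
    while IFS=': ' read -r var val _; do    # this loop produces the [[ ]] / continue trace above
      [[ $var == "$get" ]] && echo "${val:-0}" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
  }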
00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.625 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.625 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.626 21:12:22 -- setup/common.sh@33 -- # echo 0 00:04:58.626 21:12:22 -- setup/common.sh@33 -- # return 0 00:04:58.626 21:12:22 -- setup/hugepages.sh@100 -- # resv=0 00:04:58.626 nr_hugepages=1024 00:04:58.626 21:12:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:58.626 resv_hugepages=0 00:04:58.626 21:12:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:58.626 surplus_hugepages=0 00:04:58.626 21:12:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:58.626 anon_hugepages=0 00:04:58.626 21:12:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:58.626 21:12:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.626 21:12:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:58.626 21:12:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:58.626 21:12:22 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:58.626 21:12:22 -- setup/common.sh@18 -- # local node= 00:04:58.626 21:12:22 -- setup/common.sh@19 -- # local var val 00:04:58.626 21:12:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.626 21:12:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.626 21:12:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.626 21:12:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.626 21:12:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.626 21:12:22 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6910456 kB' 'MemAvailable: 9411708 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 456916 kB' 'Inactive: 2370324 kB' 'Active(anon): 129072 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120276 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 179996 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99932 kB' 'KernelStack: 6816 kB' 'PageTables: 4608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 
21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.626 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.626 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 
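Once anon, surp, and resv have been read, the '(( 1024 == nr_hugepages + surp + resv ))' checks bracketing this HugePages_Total scan assert the kernel's hugepage accounting: the total page count must equal the count the test requested plus any surplus and reserved pages. A sketch of that check with the values from this run, reusing the get_meminfo sketch above (the 1024 target and field names come from the trace; the rest is illustrative):

  nr_hugepages=1024                         # requested by the even_2G_alloc test
  surp=$(get_meminfo HugePages_Surp)        # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)        # 0 in this run
  total=$(get_meminfo HugePages_Total)      # 1024 in this run
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2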
00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 
00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.627 21:12:22 -- setup/common.sh@33 -- # echo 1024 00:04:58.627 21:12:22 -- setup/common.sh@33 -- # return 0 00:04:58.627 21:12:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.627 21:12:22 -- setup/hugepages.sh@112 -- # get_nodes 00:04:58.627 21:12:22 -- setup/hugepages.sh@27 -- # local node 00:04:58.627 21:12:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.627 21:12:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:58.627 21:12:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:58.627 21:12:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.627 21:12:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.627 21:12:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:58.627 21:12:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:58.627 21:12:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.627 21:12:22 -- setup/common.sh@18 -- # local node=0 00:04:58.627 21:12:22 -- setup/common.sh@19 -- # local var val 00:04:58.627 21:12:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.627 21:12:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.627 21:12:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:58.627 21:12:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:58.627 21:12:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.627 21:12:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6910456 kB' 'MemUsed: 5328656 kB' 'SwapCached: 0 kB' 'Active: 457228 kB' 'Inactive: 2370324 kB' 'Active(anon): 129384 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 2708656 kB' 'Mapped: 50972 kB' 'AnonPages: 120544 kB' 'Shmem: 10488 kB' 'KernelStack: 6816 kB' 'PageTables: 4608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80064 kB' 'Slab: 179992 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:58.627 21:12:22 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.627 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.627 21:12:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 
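After the global totals check out, get_nodes() enumerates /sys/devices/system/node/node* (a single node here, so no_nodes=1) and the same HugePages_Surp lookup is repeated against node0's own meminfo file, accumulating into nodes_test[]. A rough sketch of the shape of that per-node pass, using the single-node values from this run; the accumulation details of the real hugepages.sh loop may differ:

  declare -a nodes_test=(1024)              # expected pages on node 0 in this run
  resv=0
  for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    node_surp=$(get_meminfo HugePages_Surp "$node")   # reads /sys/devices/system/node/node$node/meminfo
    (( nodes_test[node] += node_surp ))
    echo "node$node=${nodes_test[node]}"    # cf. the 'node0=1024 expecting 1024' line below
  done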
00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- 
setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # continue 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.628 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.628 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.628 21:12:22 -- setup/common.sh@33 -- # echo 0 00:04:58.628 21:12:22 -- setup/common.sh@33 -- # return 0 00:04:58.628 21:12:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.628 21:12:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.628 21:12:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.628 21:12:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.628 
node0=1024 expecting 1024 00:04:58.628 21:12:22 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:58.628 21:12:22 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:58.628 00:04:58.628 real 0m0.564s 00:04:58.628 user 0m0.281s 00:04:58.628 sys 0m0.288s 00:04:58.628 21:12:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:58.628 21:12:22 -- common/autotest_common.sh@10 -- # set +x 00:04:58.628 ************************************ 00:04:58.628 END TEST even_2G_alloc 00:04:58.628 ************************************ 00:04:58.628 21:12:22 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:58.628 21:12:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.628 21:12:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.628 21:12:22 -- common/autotest_common.sh@10 -- # set +x 00:04:58.628 ************************************ 00:04:58.628 START TEST odd_alloc 00:04:58.628 ************************************ 00:04:58.628 21:12:22 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:58.628 21:12:22 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:58.628 21:12:22 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:58.628 21:12:22 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:58.628 21:12:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.628 21:12:22 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:58.628 21:12:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:58.628 21:12:22 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:58.628 21:12:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.628 21:12:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:58.628 21:12:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:58.628 21:12:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.628 21:12:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.628 21:12:22 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:58.628 21:12:22 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:58.628 21:12:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.628 21:12:22 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:58.628 21:12:22 -- setup/hugepages.sh@83 -- # : 0 00:04:58.628 21:12:22 -- setup/hugepages.sh@84 -- # : 0 00:04:58.628 21:12:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.628 21:12:22 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:58.628 21:12:22 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:58.629 21:12:22 -- setup/hugepages.sh@160 -- # setup output 00:04:58.629 21:12:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.629 21:12:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:58.886 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:59.148 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:59.148 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:59.148 21:12:22 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:59.148 21:12:22 -- setup/hugepages.sh@89 -- # local node 00:04:59.148 21:12:22 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.148 21:12:22 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.148 21:12:22 -- setup/hugepages.sh@92 -- # local surp 00:04:59.148 21:12:22 -- setup/hugepages.sh@93 -- # local resv 00:04:59.148 21:12:22 -- setup/hugepages.sh@94 -- # local anon 00:04:59.148 21:12:22 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.148 21:12:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.148 21:12:22 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.148 21:12:22 -- setup/common.sh@18 -- # local node= 00:04:59.148 21:12:22 -- setup/common.sh@19 -- # local var val 00:04:59.148 21:12:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.148 21:12:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.148 21:12:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.148 21:12:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.148 21:12:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.148 21:12:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6962472 kB' 'MemAvailable: 9463724 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457388 kB' 'Inactive: 2370324 kB' 'Active(anon): 129544 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120656 kB' 'Mapped: 51092 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 180028 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99964 kB' 'KernelStack: 6776 kB' 'PageTables: 4616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 
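The odd_alloc test that begins above asks get_test_nr_hugepages for 2098176 kB (HUGEMEM=2049 with HUGE_EVEN_ALLOC=yes) and settles on nr_hugepages=1025. The trace only shows the result, not the conversion; a minimal sketch of the likely arithmetic, assuming the 2048 kB default huge page size reported in the meminfo dumps and round-up division:

    #!/usr/bin/env bash
    # Hypothetical re-creation of the size -> page-count step, not the verbatim
    # get_test_nr_hugepages from setup/hugepages.sh.
    size_kb=2098176                                                   # requested amount (2049 MB)
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 on this runner
    nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))     # 1024.5 rounded up -> 1025
    echo "nr_hugepages=$nr_hugepages"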
00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # 
continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.148 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.149 21:12:22 -- setup/common.sh@33 -- # echo 0 00:04:59.149 21:12:22 -- setup/common.sh@33 -- # return 0 00:04:59.149 21:12:22 -- setup/hugepages.sh@97 -- # anon=0 00:04:59.149 21:12:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:59.149 21:12:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.149 21:12:22 -- setup/common.sh@18 -- # local node= 00:04:59.149 21:12:22 -- setup/common.sh@19 -- # local var val 00:04:59.149 21:12:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.149 21:12:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.149 21:12:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.149 21:12:22 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.149 21:12:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.149 21:12:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6962472 kB' 'MemAvailable: 9463724 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457388 kB' 'Inactive: 2370324 kB' 'Active(anon): 129544 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120728 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 180032 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99968 kB' 'KernelStack: 6848 kB' 'PageTables: 4696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.149 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.149 21:12:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 
21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 
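The long runs of IFS=': ' / read -r var val _ / continue entries around here are setup/common.sh's get_meminfo scanning the memory counters one key at a time: every line whose key is not the requested field (HugePages_Surp at this point) is skipped, and the value of the matching line is echoed back to the caller. A condensed sketch of that helper, reconstructed from the trace rather than copied from the repository, so names and details may differ:

    # Hedged reconstruction of get_meminfo based on the xtrace above.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node index, read the per-node counters instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while IFS= read -r line; do
            line=${line#"Node $node "}            # per-node files prefix each line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

    get_meminfo HugePages_Surp        # 0 in this run
    get_meminfo HugePages_Surp 0      # same field, read from node0's meminfo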
00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.150 21:12:22 -- setup/common.sh@33 -- # echo 0 00:04:59.150 21:12:22 -- setup/common.sh@33 -- # return 0 00:04:59.150 21:12:22 -- setup/hugepages.sh@99 -- # surp=0 00:04:59.150 21:12:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:59.150 21:12:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:59.150 21:12:22 -- setup/common.sh@18 -- # local node= 00:04:59.150 21:12:22 -- setup/common.sh@19 -- # local var val 00:04:59.150 21:12:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.150 21:12:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.150 21:12:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.150 21:12:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.150 21:12:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.150 21:12:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.150 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.150 21:12:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6962472 kB' 'MemAvailable: 9463724 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 456964 kB' 'Inactive: 2370324 kB' 'Active(anon): 129120 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120260 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 180016 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99952 kB' 'KernelStack: 6816 kB' 
'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.150 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
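The snapshot dumped just above reports Hugepagesize: 2048 kB, HugePages_Total: 1025, HugePages_Free: 1025 and Hugetlb: 2099200 kB, and those numbers are mutually consistent: 1025 pages of 2048 kB each is exactly the 2099200 kB the kernel accounts under Hugetlb. A one-line check (illustrative only, not part of the test):

    echo $(( 1025 * 2048 ))    # 2099200, matching the Hugetlb: 2099200 kB line above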
00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.151 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.152 21:12:22 -- setup/common.sh@33 -- # echo 0 00:04:59.152 21:12:22 -- setup/common.sh@33 -- # return 0 00:04:59.152 21:12:22 -- setup/hugepages.sh@100 -- # resv=0 00:04:59.152 nr_hugepages=1025 00:04:59.152 21:12:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:59.152 resv_hugepages=0 00:04:59.152 21:12:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:59.152 surplus_hugepages=0 00:04:59.152 21:12:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:59.152 anon_hugepages=0 00:04:59.152 21:12:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:59.152 21:12:22 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:59.152 21:12:22 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:59.152 21:12:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:59.152 21:12:22 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:59.152 21:12:22 -- setup/common.sh@18 -- # local node= 00:04:59.152 21:12:22 -- setup/common.sh@19 -- # local var val 00:04:59.152 21:12:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.152 21:12:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.152 21:12:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.152 21:12:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.152 21:12:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.152 21:12:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6962472 kB' 'MemAvailable: 9463724 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457200 kB' 'Inactive: 2370324 kB' 'Active(anon): 129356 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120496 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 180012 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99948 kB' 'KernelStack: 6816 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 
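With anon=0, surp=0 and resv=0 collected, the script echoes nr_hugepages=1025 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and the checks at hugepages.sh@107-110 confirm that the configured count matches what the kernel reports system-wide. A minimal, self-contained sketch of that consistency check; verify_counts is an invented name and the real script structures this differently:

    # Hypothetical condensation of the verification traced above.
    verify_counts() {
        local nr_hugepages=$1 surp=$2 resv=$3
        local total
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1025 in this run
        (( total == nr_hugepages + surp + resv )) || return 1
        (( total == nr_hugepages ))                                   # no surplus/reserved expected here
    }

    verify_counts 1025 0 0 && echo "hugepage counters consistent"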
00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 
00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 21:12:22 -- setup/common.sh@33 -- # echo 1025 00:04:59.153 21:12:22 -- setup/common.sh@33 -- # return 0 00:04:59.153 21:12:22 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:59.153 21:12:22 -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.153 21:12:22 -- setup/hugepages.sh@27 -- # local node 00:04:59.153 21:12:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.153 21:12:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
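The loop traced above is setup/common.sh scanning a node's meminfo one field at a time: each iteration reads a "Key: value" pair with IFS=': ', skips every key that is not the one requested, and finally echoes the matching value, here HugePages_Total = 1025. odd_alloc then checks that this total equals the requested page count plus surplus and reserved pages and starts walking the NUMA nodes under /sys/devices/system/node. A minimal sketch of that scan pattern, not the SPDK helper itself (the function name and argument handling below are illustrative):

  get_meminfo_sketch() {
      # Print the value of one meminfo field, system-wide or for one NUMA node.
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix every line with "Node <n> "; strip that prefix,
      # then split each "Key:   value [kB]" line on ':' and spaces, exactly
      # the pattern the traced loop uses.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9][0-9]* //' "$mem_f")
      return 1
  }

  # Example matching the trace above: node 0 reports 1025 hugepages here.
  # get_meminfo_sketch HugePages_Total 0    # -> 1025
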
00:04:59.153 21:12:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:59.153 21:12:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.153 21:12:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.153 21:12:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.153 21:12:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:59.153 21:12:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.153 21:12:22 -- setup/common.sh@18 -- # local node=0 00:04:59.153 21:12:22 -- setup/common.sh@19 -- # local var val 00:04:59.153 21:12:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.153 21:12:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.153 21:12:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:59.153 21:12:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:59.153 21:12:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.153 21:12:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6962472 kB' 'MemUsed: 5276640 kB' 'SwapCached: 0 kB' 'Active: 457144 kB' 'Inactive: 2370324 kB' 'Active(anon): 129300 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 2708656 kB' 'Mapped: 50972 kB' 'AnonPages: 120416 kB' 'Shmem: 10488 kB' 'KernelStack: 6800 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80064 kB' 'Slab: 180012 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 
21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 
21:12:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # continue 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 21:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 21:12:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 21:12:22 -- setup/common.sh@33 -- # echo 0 00:04:59.154 21:12:22 -- setup/common.sh@33 -- # return 0 00:04:59.154 21:12:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.154 21:12:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.154 21:12:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.154 21:12:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.154 node0=1025 expecting 1025 00:04:59.154 21:12:22 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:59.154 21:12:22 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:59.154 00:04:59.154 real 0m0.540s 00:04:59.154 user 0m0.288s 00:04:59.154 sys 0m0.285s 00:04:59.154 21:12:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.154 21:12:22 -- common/autotest_common.sh@10 -- # set +x 00:04:59.154 ************************************ 00:04:59.154 END TEST odd_alloc 00:04:59.154 ************************************ 00:04:59.412 21:12:22 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:59.412 21:12:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.412 21:12:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.412 21:12:22 -- common/autotest_common.sh@10 -- # set +x 00:04:59.412 ************************************ 00:04:59.412 START TEST custom_alloc 00:04:59.412 ************************************ 00:04:59.412 21:12:22 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:59.412 21:12:22 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:59.413 21:12:22 -- setup/hugepages.sh@169 -- # local node 00:04:59.413 21:12:22 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:59.413 21:12:22 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:59.413 21:12:22 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:59.413 21:12:22 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:04:59.413 21:12:22 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:59.413 21:12:22 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:59.413 21:12:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:59.413 21:12:22 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:59.413 21:12:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:59.413 21:12:22 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:59.413 21:12:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:59.413 21:12:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:59.413 21:12:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:59.413 21:12:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:59.413 21:12:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:59.413 21:12:22 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:59.413 21:12:22 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:59.413 21:12:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:59.413 21:12:22 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:59.413 21:12:22 -- setup/hugepages.sh@83 -- # : 0 00:04:59.413 21:12:22 -- setup/hugepages.sh@84 -- # : 0 00:04:59.413 21:12:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:59.413 21:12:22 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:59.413 21:12:22 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:59.413 21:12:22 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:59.413 21:12:22 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:59.413 21:12:22 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:59.413 21:12:22 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:59.413 21:12:22 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:59.413 21:12:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:59.413 21:12:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:59.413 21:12:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:59.413 21:12:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:59.413 21:12:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:59.413 21:12:22 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:59.413 21:12:22 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:59.413 21:12:22 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:59.413 21:12:22 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:59.413 21:12:22 -- setup/hugepages.sh@78 -- # return 0 00:04:59.413 21:12:22 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:59.413 21:12:22 -- setup/hugepages.sh@187 -- # setup output 00:04:59.413 21:12:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.413 21:12:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:59.673 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:59.673 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:59.673 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:59.673 21:12:23 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:59.673 21:12:23 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:59.673 21:12:23 -- setup/hugepages.sh@89 -- # local node 00:04:59.673 21:12:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.673 21:12:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.673 21:12:23 -- setup/hugepages.sh@92 -- # local surp 
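custom_alloc, whose trace begins above, converts the requested 1048576 kB into 512 hugepages of the default 2048 kB size, assigns all of them to node 0 through HUGENODE='nodes_hp[0]=512', and re-runs scripts/setup.sh; the PCI lines show the two emulated test devices staying on uio_pci_generic while the mounted vda disk is not rebound. verify_nr_hugepages then re-reads the hugepage counters. A short sketch of that sizing step, assuming the 2048 kB default page size reported in this log (variable names are illustrative, not the SPDK script's):

  default_hugepage_kb=2048                  # Hugepagesize: 2048 kB in this log
  request_kb=1048576                        # custom_alloc's 1 GiB request

  nr_hugepages=$(( request_kb / default_hugepage_kb ))    # -> 512
  declare -A nodes_hp=( [0]=$nr_hugepages )               # single-node test VM

  # Build the same kind of HUGENODE string that scripts/setup.sh received above.
  HUGENODE=
  for node in "${!nodes_hp[@]}"; do
      HUGENODE+="nodes_hp[$node]=${nodes_hp[$node]},"
  done
  echo "HUGENODE=${HUGENODE%,}"             # HUGENODE=nodes_hp[0]=512
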
00:04:59.673 21:12:23 -- setup/hugepages.sh@93 -- # local resv 00:04:59.673 21:12:23 -- setup/hugepages.sh@94 -- # local anon 00:04:59.673 21:12:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.673 21:12:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.673 21:12:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.673 21:12:23 -- setup/common.sh@18 -- # local node= 00:04:59.673 21:12:23 -- setup/common.sh@19 -- # local var val 00:04:59.673 21:12:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.673 21:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.673 21:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.673 21:12:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.673 21:12:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.673 21:12:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.673 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 21:12:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8006756 kB' 'MemAvailable: 10508008 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457632 kB' 'Inactive: 2370324 kB' 'Active(anon): 129788 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120936 kB' 'Mapped: 51088 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 180040 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99976 kB' 'KernelStack: 6840 kB' 'PageTables: 4808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:59.673 21:12:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.673 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 21:12:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.673 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.673 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.673 21:12:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.673 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.673 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 
00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.674 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.674 21:12:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.675 21:12:23 -- setup/common.sh@33 -- # echo 0 00:04:59.675 21:12:23 -- setup/common.sh@33 -- # return 0 00:04:59.675 21:12:23 -- setup/hugepages.sh@97 -- # anon=0 00:04:59.675 21:12:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:59.675 21:12:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.675 21:12:23 -- setup/common.sh@18 -- # local node= 00:04:59.675 21:12:23 -- setup/common.sh@19 -- # local var val 00:04:59.675 21:12:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.675 21:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
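The value just returned, AnonHugePages = 0 kB, is stored by verify_nr_hugepages as anon=0 after the transparent_hugepage setting ('always [madvise] never') was checked above; the trace that follows repeats the same scan for HugePages_Surp and HugePages_Rsvd. The accounting rule is the one already applied in odd_alloc: HugePages_Total must equal the requested pages plus surplus and reserved pages. A compact rendering of that check, reusing the get_meminfo_sketch helper from the earlier example (a sketch under those assumptions, not the SPDK assertion):

  nr_hugepages=512                                  # what custom_alloc requested
  total=$(get_meminfo_sketch HugePages_Total)       # 512 in this run
  surp=$(get_meminfo_sketch HugePages_Surp)         # 0 in this run
  resv=$(get_meminfo_sketch HugePages_Rsvd)         # 0 in this run

  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting is consistent: $total pages"
  else
      echo "mismatch: total=$total, expected $((nr_hugepages + surp + resv))" >&2
  fi
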
00:04:59.675 21:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.675 21:12:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.675 21:12:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.675 21:12:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8006756 kB' 'MemAvailable: 10508008 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457228 kB' 'Inactive: 2370324 kB' 'Active(anon): 129384 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120496 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 180056 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99992 kB' 'KernelStack: 6816 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- 
setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.675 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.675 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 
00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.676 21:12:23 -- setup/common.sh@33 -- # echo 0 00:04:59.676 21:12:23 -- setup/common.sh@33 -- # return 0 00:04:59.676 21:12:23 -- setup/hugepages.sh@99 -- # surp=0 00:04:59.676 21:12:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:59.676 21:12:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:59.676 21:12:23 -- setup/common.sh@18 -- # local node= 00:04:59.676 21:12:23 -- setup/common.sh@19 -- # local var val 00:04:59.676 21:12:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.676 21:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.676 21:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.676 21:12:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.676 21:12:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.676 21:12:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8006756 kB' 'MemAvailable: 10508008 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457240 kB' 'Inactive: 2370324 kB' 'Active(anon): 129396 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120500 kB' 'Mapped: 
50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 180052 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99988 kB' 'KernelStack: 6816 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.676 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.676 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 
00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 
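Right after this scan completes, the trace records resv=0 and nr_hugepages=512 and asserts that the pool it configured equals nr_hugepages plus surplus plus reserved pages. The same counters, and the same arithmetic, can be checked by hand; the snippet below is illustrative, using the values visible in this run:
# global hugepage counters the test is reconciling
grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
# the consistency check the trace performs (values from this run)
nr_hugepages=512 surp=0 resv=0
(( 512 == nr_hugepages + surp + resv )) && echo 'pool size matches'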
00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.677 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.677 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.677 21:12:23 -- setup/common.sh@33 -- # echo 0 00:04:59.677 21:12:23 -- setup/common.sh@33 -- # return 0 00:04:59.677 21:12:23 -- setup/hugepages.sh@100 -- # resv=0 00:04:59.677 nr_hugepages=512 00:04:59.677 21:12:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:59.677 resv_hugepages=0 00:04:59.677 21:12:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:59.677 surplus_hugepages=0 00:04:59.677 21:12:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:59.677 anon_hugepages=0 00:04:59.677 21:12:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:59.677 21:12:23 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:59.678 21:12:23 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:59.678 21:12:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:59.678 21:12:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:59.678 21:12:23 -- setup/common.sh@18 -- # local node= 00:04:59.678 21:12:23 -- setup/common.sh@19 -- # local var val 00:04:59.678 21:12:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.678 21:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.678 21:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.678 21:12:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.678 21:12:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.678 21:12:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8006756 kB' 'MemAvailable: 10508008 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 456968 kB' 'Inactive: 2370324 kB' 'Active(anon): 129124 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120492 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 180052 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99988 kB' 'KernelStack: 6816 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 
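Later in this trace the script moves from the global counters to per-node accounting: setup/hugepages.sh@29 loops over /sys/devices/system/node/node+([0-9]) and get_meminfo re-reads /sys/devices/system/node/node0/meminfo, stripping the leading "Node 0 " prefix so the remaining fields parse the same way as /proc/meminfo. A hedged sketch of that per-node read, with paths as shown in the trace and an illustrative loop body:
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # per-node lines look like "Node 0 HugePages_Surp: 0";
    # drop the "Node <id> " prefix, then pick out the hugepage counters
    sed "s/^Node $node //" "$node_dir/meminfo" |
        grep -E '^HugePages_(Total|Free|Surp):'
done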
00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.678 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.678 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.679 21:12:23 -- setup/common.sh@33 -- # echo 512 00:04:59.679 21:12:23 -- setup/common.sh@33 -- # return 0 00:04:59.679 21:12:23 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:59.679 21:12:23 -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.679 21:12:23 -- setup/hugepages.sh@27 -- # local node 00:04:59.679 21:12:23 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:04:59.679 21:12:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:59.679 21:12:23 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:59.679 21:12:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.679 21:12:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.679 21:12:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.679 21:12:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:59.679 21:12:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.679 21:12:23 -- setup/common.sh@18 -- # local node=0 00:04:59.679 21:12:23 -- setup/common.sh@19 -- # local var val 00:04:59.679 21:12:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.679 21:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.679 21:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:59.679 21:12:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:59.679 21:12:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.679 21:12:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8006756 kB' 'MemUsed: 4232356 kB' 'SwapCached: 0 kB' 'Active: 456992 kB' 'Inactive: 2370324 kB' 'Active(anon): 129148 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708656 kB' 'Mapped: 50972 kB' 'AnonPages: 120292 kB' 'Shmem: 10488 kB' 'KernelStack: 6816 kB' 'PageTables: 4624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80064 kB' 'Slab: 180052 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.679 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.679 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.680 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 21:12:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.680 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 21:12:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.680 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.680 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.680 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.680 21:12:23 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 
21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # continue 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.938 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.938 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.938 21:12:23 -- setup/common.sh@33 -- # echo 0 00:04:59.938 21:12:23 -- setup/common.sh@33 -- # return 0 00:04:59.938 21:12:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.938 21:12:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.938 21:12:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.938 21:12:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.938 node0=512 expecting 512 00:04:59.938 21:12:23 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:59.939 21:12:23 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:59.939 00:04:59.939 real 0m0.512s 00:04:59.939 user 0m0.269s 00:04:59.939 sys 0m0.277s 00:04:59.939 21:12:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.939 21:12:23 -- common/autotest_common.sh@10 -- # set +x 00:04:59.939 ************************************ 00:04:59.939 END TEST custom_alloc 00:04:59.939 ************************************ 00:04:59.939 21:12:23 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:59.939 21:12:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.939 21:12:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.939 21:12:23 -- common/autotest_common.sh@10 -- # set +x 00:04:59.939 ************************************ 00:04:59.939 START TEST no_shrink_alloc 00:04:59.939 ************************************ 00:04:59.939 21:12:23 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:59.939 21:12:23 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:59.939 21:12:23 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:59.939 21:12:23 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:59.939 21:12:23 -- 
setup/hugepages.sh@51 -- # shift 00:04:59.939 21:12:23 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:59.939 21:12:23 -- setup/hugepages.sh@52 -- # local node_ids 00:04:59.939 21:12:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:59.939 21:12:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:59.939 21:12:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:59.939 21:12:23 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:59.939 21:12:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:59.939 21:12:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:59.939 21:12:23 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:59.939 21:12:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:59.939 21:12:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:59.939 21:12:23 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:59.939 21:12:23 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:59.939 21:12:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:59.939 21:12:23 -- setup/hugepages.sh@73 -- # return 0 00:04:59.939 21:12:23 -- setup/hugepages.sh@198 -- # setup output 00:04:59.939 21:12:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.939 21:12:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.199 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.199 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:00.199 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:00.199 21:12:23 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:00.199 21:12:23 -- setup/hugepages.sh@89 -- # local node 00:05:00.199 21:12:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.199 21:12:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:00.199 21:12:23 -- setup/hugepages.sh@92 -- # local surp 00:05:00.199 21:12:23 -- setup/hugepages.sh@93 -- # local resv 00:05:00.199 21:12:23 -- setup/hugepages.sh@94 -- # local anon 00:05:00.199 21:12:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.199 21:12:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.199 21:12:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.199 21:12:23 -- setup/common.sh@18 -- # local node= 00:05:00.199 21:12:23 -- setup/common.sh@19 -- # local var val 00:05:00.199 21:12:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.199 21:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.199 21:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.199 21:12:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.199 21:12:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.199 21:12:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6966500 kB' 'MemAvailable: 9467752 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457356 kB' 'Inactive: 2370324 kB' 'Active(anon): 129512 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120880 kB' 'Mapped: 51080 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 
180028 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99964 kB' 'KernelStack: 6876 kB' 'PageTables: 4816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 
-- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- 
setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 21:12:23 -- setup/common.sh@33 -- # echo 0 00:05:00.200 21:12:23 -- setup/common.sh@33 -- # return 0 00:05:00.200 21:12:23 -- setup/hugepages.sh@97 -- # anon=0 00:05:00.200 21:12:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.200 21:12:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.200 21:12:23 -- setup/common.sh@18 -- # local node= 00:05:00.200 21:12:23 -- setup/common.sh@19 -- # local var val 00:05:00.200 21:12:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.200 21:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.200 21:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.200 21:12:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.200 21:12:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.200 21:12:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6966500 kB' 'MemAvailable: 9467752 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457216 kB' 'Inactive: 2370324 kB' 'Active(anon): 129372 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120740 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 180024 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99960 kB' 'KernelStack: 6812 kB' 'PageTables: 4628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.200 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 
-- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- 
setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 
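The backslash-heavy comparisons running through this section (for example [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]) are not log corruption: this is how Bash xtrace prints the right-hand side of a [[ ... == ... ]] test when the requested key comes from a quoted expansion, escaping each character to show it is matched literally. A minimal, hypothetical reproduction outside the test scripts:

    set -x
    get=HugePages_Surp
    [[ MemTotal == "$get" ]]    # traced roughly as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]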
00:05:00.201 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.201 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 21:12:23 -- setup/common.sh@33 -- # echo 0 00:05:00.202 21:12:23 -- setup/common.sh@33 -- # return 0 00:05:00.202 21:12:23 -- setup/hugepages.sh@99 -- # surp=0 00:05:00.202 21:12:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:00.202 21:12:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.202 21:12:23 -- setup/common.sh@18 -- # local node= 00:05:00.202 21:12:23 -- setup/common.sh@19 -- # local var val 00:05:00.202 21:12:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.202 21:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.202 21:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.202 21:12:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.202 21:12:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.202 21:12:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6966500 kB' 'MemAvailable: 9467752 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457208 kB' 'Inactive: 2370324 kB' 'Active(anon): 129364 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120504 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 180024 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99960 kB' 'KernelStack: 6796 kB' 'PageTables: 4588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 
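The scan that just produced surp=0, and that restarts here for HugePages_Rsvd, is the get_meminfo helper walking every field of the meminfo snapshot until it hits the requested key and echoing its value. A condensed sketch of that lookup, reconstructed from the trace rather than taken from the shipped setup/common.sh, could look like this:

    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo() {                        # sketch only, not the real helper
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        # With a node argument, read that node's own meminfo instead (as node=0 does later).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

With the values dumped above, get_meminfo HugePages_Surp prints 0, and get_meminfo HugePages_Surp 0 reads node0's file, which is what the surrounding trace records show.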
00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
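On the hugepage side, the snapshot printed just above is internally consistent: the pool is 1024 pages of the default 2048 kB size, which is exactly the Hugetlb figure. A quick check using only values from the dump:

    # Values copied from the meminfo dump above; the arithmetic is only a sanity check.
    pages=1024      # HugePages_Total (and HugePages_Free: nothing is in use yet)
    size_kb=2048    # Hugepagesize
    echo $(( pages * size_kb ))   # 2097152 kB, matching the reported Hugetlb (2 GiB)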
00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.202 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.203 21:12:23 -- setup/common.sh@33 -- # echo 0 00:05:00.203 21:12:23 -- setup/common.sh@33 -- # return 0 00:05:00.203 21:12:23 -- setup/hugepages.sh@100 -- # resv=0 00:05:00.203 nr_hugepages=1024 00:05:00.203 21:12:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:00.203 resv_hugepages=0 00:05:00.203 21:12:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.203 surplus_hugepages=0 00:05:00.203 21:12:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.203 anon_hugepages=0 00:05:00.203 21:12:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.203 21:12:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.203 21:12:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:00.203 21:12:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.203 21:12:23 -- 
setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.463 21:12:23 -- setup/common.sh@18 -- # local node= 00:05:00.463 21:12:23 -- setup/common.sh@19 -- # local var val 00:05:00.463 21:12:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.463 21:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.463 21:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.463 21:12:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.463 21:12:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.463 21:12:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6966500 kB' 'MemAvailable: 9467752 kB' 'Buffers: 2684 kB' 'Cached: 2705972 kB' 'SwapCached: 0 kB' 'Active: 457036 kB' 'Inactive: 2370324 kB' 'Active(anon): 129192 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120360 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 180020 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99956 kB' 'KernelStack: 6812 kB' 'PageTables: 4624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 
-- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.463 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.463 21:12:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- 
# continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 
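By this point the trace has established anon=0, surp=0 and resv=0, echoed nr_hugepages=1024, and is re-reading HugePages_Total so the expected page count can be checked against what the kernel actually reports. The bookkeeping being asserted amounts to the following (standalone sketch using awk instead of the traced helper):

    # Consistency check reconstructed from this run's values.
    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)   # 1024 in the dump above
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2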
00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.464 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.464 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.464 21:12:23 -- setup/common.sh@33 -- # echo 1024 00:05:00.464 21:12:23 -- setup/common.sh@33 -- # return 0 00:05:00.464 21:12:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.464 21:12:23 -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.464 21:12:23 -- setup/hugepages.sh@27 -- # local node 00:05:00.464 21:12:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.464 21:12:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:00.464 21:12:23 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:00.464 21:12:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.464 21:12:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.464 21:12:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.464 21:12:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.464 21:12:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.464 21:12:23 -- setup/common.sh@18 -- # local node=0 00:05:00.465 21:12:23 -- setup/common.sh@19 -- # local var val 00:05:00.465 21:12:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.465 21:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.465 21:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.465 21:12:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.465 21:12:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.465 21:12:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6966500 kB' 'MemUsed: 5272612 kB' 'SwapCached: 0 kB' 'Active: 457028 kB' 'Inactive: 2370324 kB' 'Active(anon): 129184 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708656 kB' 'Mapped: 50972 kB' 'AnonPages: 120352 kB' 'Shmem: 10488 kB' 'KernelStack: 6812 kB' 'PageTables: 4624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80064 kB' 'Slab: 180044 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.465 21:12:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.465 21:12:23 -- 
setup/common.sh@32 -- # continue 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.465 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.466 21:12:23 -- setup/common.sh@32 -- # continue 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.466 21:12:23 -- setup/common.sh@31 -- # read -r var val _ 
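The block above repeats the same field scan, but against /sys/devices/system/node/node0/meminfo: with a node argument the helper switches mem_f to the node's own file, and the test then confirms each node still holds the pages it expects (the 'node0=1024 expecting 1024' line that follows). A rough standalone equivalent for this single-node VM:

    # Per-node check sketched from the trace; node files prefix every field with "Node N ".
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
        surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_dir/meminfo")
        echo "node${node}=${total} (surplus ${surp}), expecting 1024"
    done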
00:05:00.466 21:12:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.466 21:12:23 -- setup/common.sh@33 -- # echo 0 00:05:00.466 21:12:23 -- setup/common.sh@33 -- # return 0 00:05:00.466 21:12:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.466 21:12:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.466 21:12:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.466 21:12:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.466 node0=1024 expecting 1024 00:05:00.466 21:12:23 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:00.466 21:12:23 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:00.466 21:12:23 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:00.466 21:12:23 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:00.466 21:12:23 -- setup/hugepages.sh@202 -- # setup output 00:05:00.466 21:12:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.466 21:12:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.727 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:00.727 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:00.727 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:00.727 21:12:24 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:00.727 21:12:24 -- setup/hugepages.sh@89 -- # local node 00:05:00.727 21:12:24 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.727 21:12:24 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:00.727 21:12:24 -- setup/hugepages.sh@92 -- # local surp 00:05:00.727 21:12:24 -- setup/hugepages.sh@93 -- # local resv 00:05:00.727 21:12:24 -- setup/hugepages.sh@94 -- # local anon 00:05:00.727 21:12:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.728 21:12:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.728 21:12:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.728 21:12:24 -- setup/common.sh@18 -- # local node= 00:05:00.728 21:12:24 -- setup/common.sh@19 -- # local var val 00:05:00.728 21:12:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.728 21:12:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.728 21:12:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.728 21:12:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.728 21:12:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.728 21:12:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6964312 kB' 'MemAvailable: 9465568 kB' 'Buffers: 2684 kB' 'Cached: 2705976 kB' 'SwapCached: 0 kB' 'Active: 457412 kB' 'Inactive: 2370328 kB' 'Active(anon): 129568 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120728 kB' 'Mapped: 51152 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 180032 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99968 kB' 'KernelStack: 6840 kB' 'PageTables: 4808 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- 
# continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.728 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.728 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 
21:12:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.729 21:12:24 -- setup/common.sh@33 -- # echo 0 00:05:00.729 21:12:24 -- setup/common.sh@33 -- # return 0 00:05:00.729 21:12:24 -- setup/hugepages.sh@97 -- # anon=0 00:05:00.729 21:12:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.729 21:12:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.729 21:12:24 -- setup/common.sh@18 -- # local node= 00:05:00.729 21:12:24 -- setup/common.sh@19 -- # local var val 00:05:00.729 21:12:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.729 21:12:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.729 21:12:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.729 21:12:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.729 21:12:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.729 21:12:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6964312 kB' 'MemAvailable: 9465568 kB' 'Buffers: 2684 kB' 'Cached: 2705976 kB' 'SwapCached: 0 kB' 'Active: 457092 kB' 'Inactive: 2370328 kB' 'Active(anon): 129248 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120388 kB' 'Mapped: 51100 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 180024 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99960 kB' 'KernelStack: 6792 kB' 'PageTables: 4668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 
21:12:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 
21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.729 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.729 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 
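The run of "continue" entries above and below is setup/common.sh's get_meminfo scanning /proc/meminfo field by field until it reaches the requested key (here HugePages_Surp) and echoing its value. A minimal sketch of that lookup pattern follows, assuming only the node-less /proc/meminfo case (the traced helper additionally strips a "Node N " prefix when it reads /sys/devices/system/node/nodeN/meminfo); get_meminfo_sketch is an illustrative name for this reduction, not the script's own function.

    #!/usr/bin/env bash
    # Approximate the traced lookup: print the value of one /proc/meminfo
    # field (e.g. HugePages_Total), or 0 if the field is not present.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Skip every non-matching field, as the repeated "continue" entries show.
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < /proc/meminfo
        echo 0
    }
    # On this runner, get_meminfo_sketch HugePages_Total would print 1024,
    # matching the 'HugePages_Total: 1024' value captured in the trace.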
00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # 
continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.730 21:12:24 -- setup/common.sh@33 -- # echo 0 00:05:00.730 21:12:24 -- setup/common.sh@33 -- # return 0 00:05:00.730 21:12:24 -- setup/hugepages.sh@99 -- # surp=0 00:05:00.730 21:12:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:00.730 21:12:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.730 21:12:24 -- setup/common.sh@18 -- # local node= 00:05:00.730 21:12:24 -- setup/common.sh@19 -- # local var val 00:05:00.730 21:12:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.730 21:12:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.730 21:12:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.730 21:12:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.730 21:12:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.730 21:12:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6964312 kB' 'MemAvailable: 9465568 kB' 'Buffers: 2684 kB' 'Cached: 2705976 kB' 'SwapCached: 0 kB' 'Active: 457092 kB' 'Inactive: 2370328 kB' 'Active(anon): 129248 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120616 kB' 'Mapped: 51100 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 180020 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99956 kB' 'KernelStack: 6760 kB' 'PageTables: 4576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # 
continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.730 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.730 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 
-- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- 
setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.731 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.731 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.732 21:12:24 -- setup/common.sh@33 -- # echo 0 00:05:00.732 21:12:24 -- setup/common.sh@33 -- # return 0 00:05:00.732 21:12:24 -- setup/hugepages.sh@100 -- # resv=0 00:05:00.732 nr_hugepages=1024 00:05:00.732 21:12:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:00.732 resv_hugepages=0 00:05:00.732 21:12:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.732 surplus_hugepages=0 00:05:00.732 21:12:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.732 anon_hugepages=0 00:05:00.732 21:12:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.732 21:12:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.732 21:12:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:00.732 21:12:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.732 21:12:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.732 21:12:24 -- setup/common.sh@18 -- # local node= 00:05:00.732 21:12:24 -- 
setup/common.sh@19 -- # local var val 00:05:00.732 21:12:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.732 21:12:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.732 21:12:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.732 21:12:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.732 21:12:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.732 21:12:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6964312 kB' 'MemAvailable: 9465568 kB' 'Buffers: 2684 kB' 'Cached: 2705976 kB' 'SwapCached: 0 kB' 'Active: 456968 kB' 'Inactive: 2370328 kB' 'Active(anon): 129124 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120200 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80064 kB' 'Slab: 180024 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99960 kB' 'KernelStack: 6768 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 
21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 
21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.732 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.732 21:12:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- 
setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.733 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.733 21:12:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.733 21:12:24 -- setup/common.sh@33 -- # echo 1024 00:05:00.733 21:12:24 -- setup/common.sh@33 -- # return 0 00:05:00.733 21:12:24 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.733 21:12:24 -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.733 21:12:24 -- setup/hugepages.sh@27 -- # local node 00:05:00.992 21:12:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.992 21:12:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:00.992 21:12:24 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:00.992 21:12:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.992 21:12:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.992 21:12:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.992 21:12:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.992 21:12:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.992 21:12:24 -- setup/common.sh@18 -- # local node=0 00:05:00.992 21:12:24 -- setup/common.sh@19 -- # local var val 00:05:00.992 21:12:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.992 21:12:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.992 21:12:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.992 21:12:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.992 21:12:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.992 21:12:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.992 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.992 21:12:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6964312 kB' 'MemUsed: 5274800 kB' 'SwapCached: 0 kB' 'Active: 457228 kB' 'Inactive: 2370328 kB' 'Active(anon): 129384 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2370328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708660 kB' 'Mapped: 50972 kB' 'AnonPages: 120460 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80064 kB' 'Slab: 180024 kB' 'SReclaimable: 80064 kB' 'SUnreclaim: 99960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:00.992 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.992 21:12:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.992 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.992 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.992 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.992 21:12:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.992 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.992 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.992 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.992 21:12:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.992 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.992 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.992 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.992 21:12:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.992 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.992 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.992 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.992 21:12:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.992 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.992 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.992 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.992 21:12:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.992 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # continue 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.993 21:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.993 21:12:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.993 
21:12:24 -- setup/common.sh@33 -- # echo 0 00:05:00.993 21:12:24 -- setup/common.sh@33 -- # return 0 00:05:00.993 21:12:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.993 21:12:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.993 21:12:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.993 21:12:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.993 node0=1024 expecting 1024 00:05:00.993 21:12:24 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:00.993 21:12:24 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:00.993 00:05:00.993 real 0m1.017s 00:05:00.993 user 0m0.513s 00:05:00.993 sys 0m0.572s 00:05:00.993 21:12:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.993 21:12:24 -- common/autotest_common.sh@10 -- # set +x 00:05:00.993 ************************************ 00:05:00.993 END TEST no_shrink_alloc 00:05:00.993 ************************************ 00:05:00.993 21:12:24 -- setup/hugepages.sh@217 -- # clear_hp 00:05:00.993 21:12:24 -- setup/hugepages.sh@37 -- # local node hp 00:05:00.993 21:12:24 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:00.993 21:12:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.993 21:12:24 -- setup/hugepages.sh@41 -- # echo 0 00:05:00.993 21:12:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.993 21:12:24 -- setup/hugepages.sh@41 -- # echo 0 00:05:00.993 21:12:24 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:00.993 21:12:24 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:00.993 00:05:00.993 real 0m4.715s 00:05:00.993 user 0m2.323s 00:05:00.993 sys 0m2.454s 00:05:00.993 21:12:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.993 21:12:24 -- common/autotest_common.sh@10 -- # set +x 00:05:00.993 ************************************ 00:05:00.993 END TEST hugepages 00:05:00.993 ************************************ 00:05:00.993 21:12:24 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:00.994 21:12:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.994 21:12:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.994 21:12:24 -- common/autotest_common.sh@10 -- # set +x 00:05:00.994 ************************************ 00:05:00.994 START TEST driver 00:05:00.994 ************************************ 00:05:00.994 21:12:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:00.994 * Looking for test storage... 
00:05:00.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:00.994 21:12:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:00.994 21:12:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:00.994 21:12:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:01.252 21:12:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:01.252 21:12:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:01.252 21:12:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:01.252 21:12:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:01.252 21:12:24 -- scripts/common.sh@335 -- # IFS=.-: 00:05:01.252 21:12:24 -- scripts/common.sh@335 -- # read -ra ver1 00:05:01.252 21:12:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.252 21:12:24 -- scripts/common.sh@336 -- # read -ra ver2 00:05:01.252 21:12:24 -- scripts/common.sh@337 -- # local 'op=<' 00:05:01.252 21:12:24 -- scripts/common.sh@339 -- # ver1_l=2 00:05:01.252 21:12:24 -- scripts/common.sh@340 -- # ver2_l=1 00:05:01.252 21:12:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:01.252 21:12:24 -- scripts/common.sh@343 -- # case "$op" in 00:05:01.252 21:12:24 -- scripts/common.sh@344 -- # : 1 00:05:01.252 21:12:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:01.252 21:12:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.252 21:12:24 -- scripts/common.sh@364 -- # decimal 1 00:05:01.252 21:12:24 -- scripts/common.sh@352 -- # local d=1 00:05:01.252 21:12:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.252 21:12:24 -- scripts/common.sh@354 -- # echo 1 00:05:01.252 21:12:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:01.252 21:12:24 -- scripts/common.sh@365 -- # decimal 2 00:05:01.252 21:12:24 -- scripts/common.sh@352 -- # local d=2 00:05:01.252 21:12:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.252 21:12:24 -- scripts/common.sh@354 -- # echo 2 00:05:01.252 21:12:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:01.252 21:12:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:01.252 21:12:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:01.252 21:12:24 -- scripts/common.sh@367 -- # return 0 00:05:01.252 21:12:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.252 21:12:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:01.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.252 --rc genhtml_branch_coverage=1 00:05:01.252 --rc genhtml_function_coverage=1 00:05:01.252 --rc genhtml_legend=1 00:05:01.252 --rc geninfo_all_blocks=1 00:05:01.253 --rc geninfo_unexecuted_blocks=1 00:05:01.253 00:05:01.253 ' 00:05:01.253 21:12:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:01.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.253 --rc genhtml_branch_coverage=1 00:05:01.253 --rc genhtml_function_coverage=1 00:05:01.253 --rc genhtml_legend=1 00:05:01.253 --rc geninfo_all_blocks=1 00:05:01.253 --rc geninfo_unexecuted_blocks=1 00:05:01.253 00:05:01.253 ' 00:05:01.253 21:12:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:01.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.253 --rc genhtml_branch_coverage=1 00:05:01.253 --rc genhtml_function_coverage=1 00:05:01.253 --rc genhtml_legend=1 00:05:01.253 --rc geninfo_all_blocks=1 00:05:01.253 --rc geninfo_unexecuted_blocks=1 00:05:01.253 00:05:01.253 ' 00:05:01.253 21:12:24 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:01.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.253 --rc genhtml_branch_coverage=1 00:05:01.253 --rc genhtml_function_coverage=1 00:05:01.253 --rc genhtml_legend=1 00:05:01.253 --rc geninfo_all_blocks=1 00:05:01.253 --rc geninfo_unexecuted_blocks=1 00:05:01.253 00:05:01.253 ' 00:05:01.253 21:12:24 -- setup/driver.sh@68 -- # setup reset 00:05:01.253 21:12:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:01.253 21:12:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:01.826 21:12:25 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:01.826 21:12:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.826 21:12:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.826 21:12:25 -- common/autotest_common.sh@10 -- # set +x 00:05:01.826 ************************************ 00:05:01.826 START TEST guess_driver 00:05:01.826 ************************************ 00:05:01.826 21:12:25 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:01.826 21:12:25 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:01.826 21:12:25 -- setup/driver.sh@47 -- # local fail=0 00:05:01.826 21:12:25 -- setup/driver.sh@49 -- # pick_driver 00:05:01.827 21:12:25 -- setup/driver.sh@36 -- # vfio 00:05:01.827 21:12:25 -- setup/driver.sh@21 -- # local iommu_grups 00:05:01.827 21:12:25 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:01.827 21:12:25 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:01.827 21:12:25 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:01.827 21:12:25 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:01.827 21:12:25 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:01.827 21:12:25 -- setup/driver.sh@32 -- # return 1 00:05:01.827 21:12:25 -- setup/driver.sh@38 -- # uio 00:05:01.827 21:12:25 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:01.827 21:12:25 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:01.827 21:12:25 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:01.827 21:12:25 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:01.827 21:12:25 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:01.827 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:01.827 21:12:25 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:01.827 Looking for driver=uio_pci_generic 00:05:01.827 21:12:25 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:01.827 21:12:25 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:01.827 21:12:25 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:01.827 21:12:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.827 21:12:25 -- setup/driver.sh@45 -- # setup output config 00:05:01.827 21:12:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.827 21:12:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:02.395 21:12:25 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:02.395 21:12:25 -- setup/driver.sh@58 -- # continue 00:05:02.395 21:12:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.395 21:12:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.395 21:12:26 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:05:02.395 21:12:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.654 21:12:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.654 21:12:26 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:02.654 21:12:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.654 21:12:26 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:02.654 21:12:26 -- setup/driver.sh@65 -- # setup reset 00:05:02.654 21:12:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.654 21:12:26 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:03.253 00:05:03.253 real 0m1.408s 00:05:03.253 user 0m0.576s 00:05:03.253 sys 0m0.830s 00:05:03.253 21:12:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:03.253 21:12:26 -- common/autotest_common.sh@10 -- # set +x 00:05:03.253 ************************************ 00:05:03.253 END TEST guess_driver 00:05:03.253 ************************************ 00:05:03.253 00:05:03.253 real 0m2.191s 00:05:03.253 user 0m0.888s 00:05:03.253 sys 0m1.356s 00:05:03.253 21:12:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:03.253 21:12:26 -- common/autotest_common.sh@10 -- # set +x 00:05:03.253 ************************************ 00:05:03.253 END TEST driver 00:05:03.253 ************************************ 00:05:03.253 21:12:26 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:03.253 21:12:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.253 21:12:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.253 21:12:26 -- common/autotest_common.sh@10 -- # set +x 00:05:03.253 ************************************ 00:05:03.253 START TEST devices 00:05:03.253 ************************************ 00:05:03.253 21:12:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:03.253 * Looking for test storage... 00:05:03.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:03.253 21:12:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:03.253 21:12:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:03.253 21:12:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:03.253 21:12:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:03.253 21:12:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:03.253 21:12:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:03.253 21:12:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:03.253 21:12:26 -- scripts/common.sh@335 -- # IFS=.-: 00:05:03.253 21:12:26 -- scripts/common.sh@335 -- # read -ra ver1 00:05:03.253 21:12:26 -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.253 21:12:26 -- scripts/common.sh@336 -- # read -ra ver2 00:05:03.253 21:12:26 -- scripts/common.sh@337 -- # local 'op=<' 00:05:03.253 21:12:26 -- scripts/common.sh@339 -- # ver1_l=2 00:05:03.253 21:12:26 -- scripts/common.sh@340 -- # ver2_l=1 00:05:03.253 21:12:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:03.253 21:12:26 -- scripts/common.sh@343 -- # case "$op" in 00:05:03.253 21:12:26 -- scripts/common.sh@344 -- # : 1 00:05:03.253 21:12:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:03.253 21:12:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.253 21:12:26 -- scripts/common.sh@364 -- # decimal 1 00:05:03.253 21:12:26 -- scripts/common.sh@352 -- # local d=1 00:05:03.253 21:12:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.525 21:12:26 -- scripts/common.sh@354 -- # echo 1 00:05:03.525 21:12:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:03.525 21:12:26 -- scripts/common.sh@365 -- # decimal 2 00:05:03.525 21:12:26 -- scripts/common.sh@352 -- # local d=2 00:05:03.525 21:12:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.525 21:12:26 -- scripts/common.sh@354 -- # echo 2 00:05:03.525 21:12:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:03.525 21:12:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:03.525 21:12:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:03.525 21:12:26 -- scripts/common.sh@367 -- # return 0 00:05:03.525 21:12:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.525 21:12:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:03.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.525 --rc genhtml_branch_coverage=1 00:05:03.525 --rc genhtml_function_coverage=1 00:05:03.525 --rc genhtml_legend=1 00:05:03.525 --rc geninfo_all_blocks=1 00:05:03.525 --rc geninfo_unexecuted_blocks=1 00:05:03.525 00:05:03.525 ' 00:05:03.525 21:12:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:03.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.525 --rc genhtml_branch_coverage=1 00:05:03.525 --rc genhtml_function_coverage=1 00:05:03.525 --rc genhtml_legend=1 00:05:03.525 --rc geninfo_all_blocks=1 00:05:03.525 --rc geninfo_unexecuted_blocks=1 00:05:03.525 00:05:03.525 ' 00:05:03.525 21:12:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:03.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.525 --rc genhtml_branch_coverage=1 00:05:03.525 --rc genhtml_function_coverage=1 00:05:03.525 --rc genhtml_legend=1 00:05:03.525 --rc geninfo_all_blocks=1 00:05:03.525 --rc geninfo_unexecuted_blocks=1 00:05:03.525 00:05:03.525 ' 00:05:03.525 21:12:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:03.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.525 --rc genhtml_branch_coverage=1 00:05:03.525 --rc genhtml_function_coverage=1 00:05:03.525 --rc genhtml_legend=1 00:05:03.525 --rc geninfo_all_blocks=1 00:05:03.525 --rc geninfo_unexecuted_blocks=1 00:05:03.525 00:05:03.525 ' 00:05:03.525 21:12:26 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:03.526 21:12:26 -- setup/devices.sh@192 -- # setup reset 00:05:03.526 21:12:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:03.526 21:12:26 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.094 21:12:27 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:04.094 21:12:27 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:04.094 21:12:27 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:04.094 21:12:27 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:04.094 21:12:27 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:04.094 21:12:27 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:04.094 21:12:27 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:04.094 21:12:27 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:04.094 21:12:27 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:04.094 21:12:27 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:04.094 21:12:27 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:04.094 21:12:27 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:04.094 21:12:27 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:04.094 21:12:27 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:04.094 21:12:27 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:04.094 21:12:27 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:04.094 21:12:27 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:04.094 21:12:27 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:04.094 21:12:27 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:04.094 21:12:27 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:04.094 21:12:27 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:04.094 21:12:27 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:04.094 21:12:27 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:04.094 21:12:27 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:04.094 21:12:27 -- setup/devices.sh@196 -- # blocks=() 00:05:04.094 21:12:27 -- setup/devices.sh@196 -- # declare -a blocks 00:05:04.094 21:12:27 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:04.094 21:12:27 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:04.094 21:12:27 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:04.094 21:12:27 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:04.094 21:12:27 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:04.094 21:12:27 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:04.094 21:12:27 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:04.094 21:12:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:04.094 21:12:27 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:04.094 21:12:27 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:04.094 21:12:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:04.094 No valid GPT data, bailing 00:05:04.094 21:12:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:04.094 21:12:27 -- scripts/common.sh@393 -- # pt= 00:05:04.094 21:12:27 -- scripts/common.sh@394 -- # return 1 00:05:04.094 21:12:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:04.094 21:12:27 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:04.094 21:12:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:04.094 21:12:27 -- setup/common.sh@80 -- # echo 5368709120 00:05:04.094 21:12:27 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:04.094 21:12:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:04.094 21:12:27 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:04.094 21:12:27 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:04.094 21:12:27 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:04.094 21:12:27 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:04.094 21:12:27 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:04.094 21:12:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:04.094 21:12:27 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
00:05:04.094 21:12:27 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:04.094 21:12:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:04.354 No valid GPT data, bailing 00:05:04.354 21:12:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:04.354 21:12:27 -- scripts/common.sh@393 -- # pt= 00:05:04.354 21:12:27 -- scripts/common.sh@394 -- # return 1 00:05:04.354 21:12:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:04.354 21:12:27 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:04.354 21:12:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:04.354 21:12:27 -- setup/common.sh@80 -- # echo 4294967296 00:05:04.354 21:12:27 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:04.354 21:12:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:04.354 21:12:27 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:04.354 21:12:27 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:04.354 21:12:27 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:04.354 21:12:27 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:04.354 21:12:27 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:04.354 21:12:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:04.354 21:12:27 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:04.354 21:12:27 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:04.354 21:12:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:04.354 No valid GPT data, bailing 00:05:04.354 21:12:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:04.354 21:12:27 -- scripts/common.sh@393 -- # pt= 00:05:04.354 21:12:27 -- scripts/common.sh@394 -- # return 1 00:05:04.354 21:12:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:04.354 21:12:27 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:04.354 21:12:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:04.354 21:12:27 -- setup/common.sh@80 -- # echo 4294967296 00:05:04.354 21:12:27 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:04.354 21:12:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:04.354 21:12:27 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:04.354 21:12:27 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:04.354 21:12:27 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:04.354 21:12:27 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:04.354 21:12:27 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:04.354 21:12:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:04.354 21:12:27 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:04.354 21:12:27 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:04.354 21:12:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:04.354 No valid GPT data, bailing 00:05:04.354 21:12:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:04.354 21:12:28 -- scripts/common.sh@393 -- # pt= 00:05:04.354 21:12:28 -- scripts/common.sh@394 -- # return 1 00:05:04.354 21:12:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:04.354 21:12:28 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:04.354 21:12:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:04.354 21:12:28 -- setup/common.sh@80 -- # echo 4294967296 
00:05:04.354 21:12:28 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:04.354 21:12:28 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:04.354 21:12:28 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:04.354 21:12:28 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:04.354 21:12:28 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:04.354 21:12:28 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:04.354 21:12:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.354 21:12:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.354 21:12:28 -- common/autotest_common.sh@10 -- # set +x 00:05:04.354 ************************************ 00:05:04.354 START TEST nvme_mount 00:05:04.354 ************************************ 00:05:04.354 21:12:28 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:04.354 21:12:28 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:04.354 21:12:28 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:04.354 21:12:28 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.354 21:12:28 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:04.354 21:12:28 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:04.354 21:12:28 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:04.354 21:12:28 -- setup/common.sh@40 -- # local part_no=1 00:05:04.354 21:12:28 -- setup/common.sh@41 -- # local size=1073741824 00:05:04.354 21:12:28 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:04.354 21:12:28 -- setup/common.sh@44 -- # parts=() 00:05:04.354 21:12:28 -- setup/common.sh@44 -- # local parts 00:05:04.354 21:12:28 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:04.354 21:12:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:04.354 21:12:28 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:04.354 21:12:28 -- setup/common.sh@46 -- # (( part++ )) 00:05:04.354 21:12:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:04.354 21:12:28 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:04.354 21:12:28 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:04.354 21:12:28 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:05.731 Creating new GPT entries in memory. 00:05:05.732 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:05.732 other utilities. 00:05:05.732 21:12:29 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:05.732 21:12:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:05.732 21:12:29 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:05.732 21:12:29 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:05.732 21:12:29 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:06.668 Creating new GPT entries in memory. 00:05:06.668 The operation has completed successfully. 
00:05:06.668 21:12:30 -- setup/common.sh@57 -- # (( part++ )) 00:05:06.668 21:12:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.668 21:12:30 -- setup/common.sh@62 -- # wait 63872 00:05:06.668 21:12:30 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:06.668 21:12:30 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:06.668 21:12:30 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:06.668 21:12:30 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:06.668 21:12:30 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:06.668 21:12:30 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:06.669 21:12:30 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:06.669 21:12:30 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:06.669 21:12:30 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:06.669 21:12:30 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:06.669 21:12:30 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:06.669 21:12:30 -- setup/devices.sh@53 -- # local found=0 00:05:06.669 21:12:30 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:06.669 21:12:30 -- setup/devices.sh@56 -- # : 00:05:06.669 21:12:30 -- setup/devices.sh@59 -- # local pci status 00:05:06.669 21:12:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.669 21:12:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:06.669 21:12:30 -- setup/devices.sh@47 -- # setup output config 00:05:06.669 21:12:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.669 21:12:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:06.669 21:12:30 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:06.669 21:12:30 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:06.669 21:12:30 -- setup/devices.sh@63 -- # found=1 00:05:06.669 21:12:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.669 21:12:30 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:06.669 21:12:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.928 21:12:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:06.928 21:12:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.188 21:12:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:07.188 21:12:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.188 21:12:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.188 21:12:30 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:07.188 21:12:30 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:07.188 21:12:30 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:07.188 21:12:30 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:07.188 21:12:30 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:07.188 21:12:30 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:07.188 21:12:30 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:07.188 21:12:30 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.188 21:12:30 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:07.188 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:07.188 21:12:30 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:07.188 21:12:30 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:07.447 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:07.447 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:07.447 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:07.447 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:07.447 21:12:31 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:07.447 21:12:31 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:07.447 21:12:31 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:07.447 21:12:31 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:07.447 21:12:31 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:07.447 21:12:31 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:07.447 21:12:31 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:07.447 21:12:31 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:07.447 21:12:31 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:07.447 21:12:31 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:07.447 21:12:31 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:07.447 21:12:31 -- setup/devices.sh@53 -- # local found=0 00:05:07.447 21:12:31 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:07.447 21:12:31 -- setup/devices.sh@56 -- # : 00:05:07.447 21:12:31 -- setup/devices.sh@59 -- # local pci status 00:05:07.447 21:12:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.447 21:12:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:07.447 21:12:31 -- setup/devices.sh@47 -- # setup output config 00:05:07.447 21:12:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.447 21:12:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:07.706 21:12:31 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:07.706 21:12:31 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:07.706 21:12:31 -- setup/devices.sh@63 -- # found=1 00:05:07.706 21:12:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.706 21:12:31 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:07.706 
21:12:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.965 21:12:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:07.965 21:12:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.965 21:12:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:07.965 21:12:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.225 21:12:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.225 21:12:31 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:08.225 21:12:31 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.225 21:12:31 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:08.225 21:12:31 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:08.225 21:12:31 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.225 21:12:31 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:08.225 21:12:31 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:08.225 21:12:31 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:08.225 21:12:31 -- setup/devices.sh@50 -- # local mount_point= 00:05:08.225 21:12:31 -- setup/devices.sh@51 -- # local test_file= 00:05:08.225 21:12:31 -- setup/devices.sh@53 -- # local found=0 00:05:08.225 21:12:31 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:08.225 21:12:31 -- setup/devices.sh@59 -- # local pci status 00:05:08.225 21:12:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.225 21:12:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:08.225 21:12:31 -- setup/devices.sh@47 -- # setup output config 00:05:08.225 21:12:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.225 21:12:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:08.484 21:12:31 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.484 21:12:31 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:08.484 21:12:31 -- setup/devices.sh@63 -- # found=1 00:05:08.484 21:12:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.484 21:12:31 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.484 21:12:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.744 21:12:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.744 21:12:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.744 21:12:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.744 21:12:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.744 21:12:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.744 21:12:32 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:08.744 21:12:32 -- setup/devices.sh@68 -- # return 0 00:05:08.744 21:12:32 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:08.744 21:12:32 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.744 21:12:32 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.744 21:12:32 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.744 21:12:32 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.744 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:08.744 00:05:08.744 real 0m4.390s 00:05:08.744 user 0m0.975s 00:05:08.744 sys 0m1.105s 00:05:08.744 21:12:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.744 ************************************ 00:05:08.744 END TEST nvme_mount 00:05:08.744 ************************************ 00:05:08.744 21:12:32 -- common/autotest_common.sh@10 -- # set +x 00:05:08.744 21:12:32 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:08.744 21:12:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.744 21:12:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.744 21:12:32 -- common/autotest_common.sh@10 -- # set +x 00:05:08.744 ************************************ 00:05:08.744 START TEST dm_mount 00:05:08.744 ************************************ 00:05:08.744 21:12:32 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:08.744 21:12:32 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:08.744 21:12:32 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:08.744 21:12:32 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:08.744 21:12:32 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:08.744 21:12:32 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:08.744 21:12:32 -- setup/common.sh@40 -- # local part_no=2 00:05:08.744 21:12:32 -- setup/common.sh@41 -- # local size=1073741824 00:05:08.744 21:12:32 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:08.744 21:12:32 -- setup/common.sh@44 -- # parts=() 00:05:08.744 21:12:32 -- setup/common.sh@44 -- # local parts 00:05:08.744 21:12:32 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:08.744 21:12:32 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.744 21:12:32 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:08.744 21:12:32 -- setup/common.sh@46 -- # (( part++ )) 00:05:08.744 21:12:32 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.744 21:12:32 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:08.744 21:12:32 -- setup/common.sh@46 -- # (( part++ )) 00:05:08.744 21:12:32 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.744 21:12:32 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:08.744 21:12:32 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:08.744 21:12:32 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:10.123 Creating new GPT entries in memory. 00:05:10.123 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:10.123 other utilities. 00:05:10.123 21:12:33 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:10.123 21:12:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:10.123 21:12:33 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:10.123 21:12:33 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:10.123 21:12:33 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:11.059 Creating new GPT entries in memory. 00:05:11.059 The operation has completed successfully. 00:05:11.059 21:12:34 -- setup/common.sh@57 -- # (( part++ )) 00:05:11.059 21:12:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.059 21:12:34 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:11.059 21:12:34 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:11.059 21:12:34 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:11.994 The operation has completed successfully. 00:05:11.994 21:12:35 -- setup/common.sh@57 -- # (( part++ )) 00:05:11.994 21:12:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.994 21:12:35 -- setup/common.sh@62 -- # wait 64332 00:05:11.994 21:12:35 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:11.994 21:12:35 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:11.994 21:12:35 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:11.994 21:12:35 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:11.994 21:12:35 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:11.994 21:12:35 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:11.994 21:12:35 -- setup/devices.sh@161 -- # break 00:05:11.994 21:12:35 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:11.994 21:12:35 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:11.994 21:12:35 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:11.994 21:12:35 -- setup/devices.sh@166 -- # dm=dm-0 00:05:11.994 21:12:35 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:11.994 21:12:35 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:11.994 21:12:35 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:11.994 21:12:35 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:11.994 21:12:35 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:11.994 21:12:35 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:11.994 21:12:35 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:11.994 21:12:35 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:11.994 21:12:35 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:11.994 21:12:35 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:11.994 21:12:35 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:11.994 21:12:35 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:11.994 21:12:35 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:11.994 21:12:35 -- setup/devices.sh@53 -- # local found=0 00:05:11.994 21:12:35 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:11.994 21:12:35 -- setup/devices.sh@56 -- # : 00:05:11.994 21:12:35 -- setup/devices.sh@59 -- # local pci status 00:05:11.994 21:12:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.994 21:12:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:11.994 21:12:35 -- setup/devices.sh@47 -- # setup output config 00:05:11.994 21:12:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.994 21:12:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:12.254 21:12:35 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:12.254 21:12:35 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:12.254 21:12:35 -- setup/devices.sh@63 -- # found=1 00:05:12.254 21:12:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.254 21:12:35 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:12.254 21:12:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.513 21:12:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:12.513 21:12:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.513 21:12:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:12.513 21:12:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.772 21:12:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.772 21:12:36 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:12.772 21:12:36 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:12.772 21:12:36 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:12.772 21:12:36 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:12.772 21:12:36 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:12.772 21:12:36 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:12.772 21:12:36 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:12.772 21:12:36 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:12.772 21:12:36 -- setup/devices.sh@50 -- # local mount_point= 00:05:12.772 21:12:36 -- setup/devices.sh@51 -- # local test_file= 00:05:12.772 21:12:36 -- setup/devices.sh@53 -- # local found=0 00:05:12.772 21:12:36 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:12.772 21:12:36 -- setup/devices.sh@59 -- # local pci status 00:05:12.772 21:12:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.772 21:12:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:12.772 21:12:36 -- setup/devices.sh@47 -- # setup output config 00:05:12.772 21:12:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.772 21:12:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:12.772 21:12:36 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:12.772 21:12:36 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:12.772 21:12:36 -- setup/devices.sh@63 -- # found=1 00:05:12.772 21:12:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.772 21:12:36 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:12.772 21:12:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.340 21:12:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:13.340 21:12:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.340 21:12:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:13.340 21:12:36 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.340 21:12:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:13.340 21:12:36 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:13.340 21:12:36 -- setup/devices.sh@68 -- # return 0 00:05:13.340 21:12:36 -- setup/devices.sh@187 -- # cleanup_dm 00:05:13.340 21:12:36 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:13.340 21:12:36 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:13.340 21:12:36 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:13.340 21:12:36 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.340 21:12:36 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:13.340 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:13.340 21:12:36 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:13.340 21:12:36 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:13.340 00:05:13.340 real 0m4.492s 00:05:13.340 user 0m0.647s 00:05:13.340 sys 0m0.774s 00:05:13.340 21:12:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.340 21:12:36 -- common/autotest_common.sh@10 -- # set +x 00:05:13.340 ************************************ 00:05:13.340 END TEST dm_mount 00:05:13.340 ************************************ 00:05:13.340 21:12:37 -- setup/devices.sh@1 -- # cleanup 00:05:13.340 21:12:37 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:13.340 21:12:37 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.340 21:12:37 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.340 21:12:37 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:13.340 21:12:37 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:13.340 21:12:37 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:13.599 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:13.599 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:13.599 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:13.599 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:13.599 21:12:37 -- setup/devices.sh@12 -- # cleanup_dm 00:05:13.599 21:12:37 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:13.599 21:12:37 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:13.599 21:12:37 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.599 21:12:37 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:13.599 21:12:37 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:13.599 21:12:37 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:13.599 ************************************ 00:05:13.599 END TEST devices 00:05:13.599 ************************************ 00:05:13.599 00:05:13.599 real 0m10.477s 00:05:13.599 user 0m2.346s 00:05:13.599 sys 0m2.464s 00:05:13.599 21:12:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.599 21:12:37 -- common/autotest_common.sh@10 -- # set +x 00:05:13.599 00:05:13.871 ************************************ 00:05:13.871 END TEST setup.sh 00:05:13.871 ************************************ 00:05:13.871 real 0m21.991s 00:05:13.871 user 0m7.662s 00:05:13.871 sys 0m8.786s 00:05:13.871 21:12:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.871 21:12:37 -- common/autotest_common.sh@10 -- # set +x 00:05:13.871 21:12:37 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:13.871 Hugepages 00:05:13.871 node hugesize free / total 00:05:13.871 node0 1048576kB 0 / 0 00:05:13.871 node0 2048kB 2048 / 2048 00:05:13.871 00:05:13.871 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:13.871 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:14.130 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:14.130 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:14.130 21:12:37 -- spdk/autotest.sh@128 -- # uname -s 00:05:14.130 21:12:37 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:14.130 21:12:37 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:14.130 21:12:37 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:14.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.697 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:14.956 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:14.956 21:12:38 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:15.892 21:12:39 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:15.892 21:12:39 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:15.892 21:12:39 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:15.892 21:12:39 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:15.892 21:12:39 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:15.892 21:12:39 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:15.892 21:12:39 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:15.892 21:12:39 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:15.892 21:12:39 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:15.892 21:12:39 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:15.892 21:12:39 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:15.892 21:12:39 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:16.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.460 Waiting for block devices as requested 00:05:16.460 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:16.460 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:16.460 21:12:40 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:16.460 21:12:40 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:16.460 21:12:40 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:16.460 21:12:40 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:16.460 21:12:40 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:16.460 21:12:40 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:16.460 21:12:40 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:16.460 21:12:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:16.460 21:12:40 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:16.460 21:12:40 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:16.460 21:12:40 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:16.460 21:12:40 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:16.460 21:12:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:16.460 21:12:40 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:16.460 21:12:40 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:16.460 21:12:40 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:16.460 21:12:40 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:16.460 21:12:40 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:16.460 21:12:40 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:16.460 21:12:40 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:16.460 21:12:40 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:16.460 21:12:40 -- common/autotest_common.sh@1552 -- # continue 00:05:16.460 21:12:40 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:16.460 21:12:40 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:16.460 21:12:40 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:16.460 21:12:40 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:16.460 21:12:40 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:16.460 21:12:40 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:16.460 21:12:40 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:16.460 21:12:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:16.460 21:12:40 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:16.460 21:12:40 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:16.460 21:12:40 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:16.460 21:12:40 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:16.460 21:12:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:16.460 21:12:40 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:16.460 21:12:40 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:16.460 21:12:40 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:16.460 21:12:40 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:16.460 21:12:40 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:16.460 21:12:40 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:16.719 21:12:40 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:16.719 21:12:40 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:16.719 21:12:40 -- common/autotest_common.sh@1552 -- # continue 00:05:16.719 21:12:40 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:16.719 21:12:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.719 21:12:40 -- common/autotest_common.sh@10 -- # set +x 00:05:16.719 21:12:40 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:16.719 21:12:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.719 21:12:40 -- common/autotest_common.sh@10 -- # set +x 00:05:16.719 21:12:40 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.287 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.287 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.546 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:17.546 21:12:41 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:17.546 21:12:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:17.546 21:12:41 -- common/autotest_common.sh@10 -- # set +x 00:05:17.546 21:12:41 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:17.546 21:12:41 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:17.546 21:12:41 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:17.546 21:12:41 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:17.546 21:12:41 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:17.546 21:12:41 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:17.546 21:12:41 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:17.546 21:12:41 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:17.546 21:12:41 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:17.546 21:12:41 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:17.546 21:12:41 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:17.546 21:12:41 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:17.546 21:12:41 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:17.546 21:12:41 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:17.546 21:12:41 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:17.546 21:12:41 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:17.546 21:12:41 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:17.546 21:12:41 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:17.546 21:12:41 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:17.546 21:12:41 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:17.546 21:12:41 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:17.546 21:12:41 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:17.546 21:12:41 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:17.546 21:12:41 -- common/autotest_common.sh@1588 -- # return 0 00:05:17.546 21:12:41 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:17.546 21:12:41 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:17.546 21:12:41 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:17.546 21:12:41 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:17.546 21:12:41 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:17.546 21:12:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:17.546 21:12:41 -- common/autotest_common.sh@10 -- # set +x 00:05:17.546 21:12:41 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:17.546 21:12:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.546 21:12:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.546 21:12:41 -- common/autotest_common.sh@10 -- # set +x 00:05:17.546 ************************************ 00:05:17.546 START TEST env 00:05:17.546 ************************************ 00:05:17.546 21:12:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:17.546 * Looking for test storage... 
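The device-ID filter traced just above (get_nvme_bdfs_by_id reading /sys/bus/pci/devices/<bdf>/device and comparing it against 0x0a54) can be reproduced by hand. A minimal standalone sketch, assuming the usual Linux sysfs layout for PCI-attached NVMe controllers; the variable names are illustrative and this is not the autotest helper itself:

  #!/usr/bin/env bash
  # Print the BDFs of NVMe controllers whose PCI device ID matches $target_id.
  target_id=0x0a54          # assumed target; this run's controllers report 0x0010, so nothing matches
  for ctrl in /sys/class/nvme/nvme*; do
      [ -e "$ctrl/device" ] || continue
      bdf=$(basename "$(readlink -f "$ctrl/device")")     # e.g. 0000:00:06.0
      dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")    # e.g. 0x0010
      [ "$dev_id" = "$target_id" ] && echo "$bdf"
  done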
00:05:17.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:17.805 21:12:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:17.805 21:12:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:17.805 21:12:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:17.805 21:12:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:17.805 21:12:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:17.805 21:12:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:17.805 21:12:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:17.805 21:12:41 -- scripts/common.sh@335 -- # IFS=.-: 00:05:17.805 21:12:41 -- scripts/common.sh@335 -- # read -ra ver1 00:05:17.805 21:12:41 -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.805 21:12:41 -- scripts/common.sh@336 -- # read -ra ver2 00:05:17.805 21:12:41 -- scripts/common.sh@337 -- # local 'op=<' 00:05:17.805 21:12:41 -- scripts/common.sh@339 -- # ver1_l=2 00:05:17.805 21:12:41 -- scripts/common.sh@340 -- # ver2_l=1 00:05:17.805 21:12:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:17.805 21:12:41 -- scripts/common.sh@343 -- # case "$op" in 00:05:17.805 21:12:41 -- scripts/common.sh@344 -- # : 1 00:05:17.805 21:12:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:17.805 21:12:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.805 21:12:41 -- scripts/common.sh@364 -- # decimal 1 00:05:17.805 21:12:41 -- scripts/common.sh@352 -- # local d=1 00:05:17.805 21:12:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.805 21:12:41 -- scripts/common.sh@354 -- # echo 1 00:05:17.805 21:12:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:17.805 21:12:41 -- scripts/common.sh@365 -- # decimal 2 00:05:17.805 21:12:41 -- scripts/common.sh@352 -- # local d=2 00:05:17.805 21:12:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.805 21:12:41 -- scripts/common.sh@354 -- # echo 2 00:05:17.805 21:12:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:17.805 21:12:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:17.805 21:12:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:17.805 21:12:41 -- scripts/common.sh@367 -- # return 0 00:05:17.805 21:12:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.805 21:12:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:17.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.805 --rc genhtml_branch_coverage=1 00:05:17.805 --rc genhtml_function_coverage=1 00:05:17.805 --rc genhtml_legend=1 00:05:17.805 --rc geninfo_all_blocks=1 00:05:17.805 --rc geninfo_unexecuted_blocks=1 00:05:17.805 00:05:17.805 ' 00:05:17.805 21:12:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:17.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.805 --rc genhtml_branch_coverage=1 00:05:17.805 --rc genhtml_function_coverage=1 00:05:17.805 --rc genhtml_legend=1 00:05:17.806 --rc geninfo_all_blocks=1 00:05:17.806 --rc geninfo_unexecuted_blocks=1 00:05:17.806 00:05:17.806 ' 00:05:17.806 21:12:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:17.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.806 --rc genhtml_branch_coverage=1 00:05:17.806 --rc genhtml_function_coverage=1 00:05:17.806 --rc genhtml_legend=1 00:05:17.806 --rc geninfo_all_blocks=1 00:05:17.806 --rc geninfo_unexecuted_blocks=1 00:05:17.806 00:05:17.806 ' 00:05:17.806 21:12:41 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:17.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.806 --rc genhtml_branch_coverage=1 00:05:17.806 --rc genhtml_function_coverage=1 00:05:17.806 --rc genhtml_legend=1 00:05:17.806 --rc geninfo_all_blocks=1 00:05:17.806 --rc geninfo_unexecuted_blocks=1 00:05:17.806 00:05:17.806 ' 00:05:17.806 21:12:41 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:17.806 21:12:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.806 21:12:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.806 21:12:41 -- common/autotest_common.sh@10 -- # set +x 00:05:17.806 ************************************ 00:05:17.806 START TEST env_memory 00:05:17.806 ************************************ 00:05:17.806 21:12:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:17.806 00:05:17.806 00:05:17.806 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.806 http://cunit.sourceforge.net/ 00:05:17.806 00:05:17.806 00:05:17.806 Suite: memory 00:05:17.806 Test: alloc and free memory map ...[2024-11-28 21:12:41.457146] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:17.806 passed 00:05:17.806 Test: mem map translation ...[2024-11-28 21:12:41.488763] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:17.806 [2024-11-28 21:12:41.488806] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:17.806 [2024-11-28 21:12:41.488862] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:17.806 [2024-11-28 21:12:41.488872] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:17.806 passed 00:05:18.065 Test: mem map registration ...[2024-11-28 21:12:41.552653] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:18.065 [2024-11-28 21:12:41.552689] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:18.065 passed 00:05:18.065 Test: mem map adjacent registrations ...passed 00:05:18.065 00:05:18.065 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.065 suites 1 1 n/a 0 0 00:05:18.065 tests 4 4 4 0 0 00:05:18.065 asserts 152 152 152 0 n/a 00:05:18.065 00:05:18.065 Elapsed time = 0.213 seconds 00:05:18.065 00:05:18.065 real 0m0.232s 00:05:18.065 user 0m0.217s 00:05:18.065 sys 0m0.009s 00:05:18.065 21:12:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.065 ************************************ 00:05:18.065 END TEST env_memory 00:05:18.065 ************************************ 00:05:18.065 21:12:41 -- common/autotest_common.sh@10 -- # set +x 00:05:18.065 21:12:41 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:18.065 21:12:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.065 21:12:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.065 21:12:41 -- 
common/autotest_common.sh@10 -- # set +x 00:05:18.065 ************************************ 00:05:18.065 START TEST env_vtophys 00:05:18.065 ************************************ 00:05:18.065 21:12:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:18.065 EAL: lib.eal log level changed from notice to debug 00:05:18.065 EAL: Detected lcore 0 as core 0 on socket 0 00:05:18.065 EAL: Detected lcore 1 as core 0 on socket 0 00:05:18.065 EAL: Detected lcore 2 as core 0 on socket 0 00:05:18.065 EAL: Detected lcore 3 as core 0 on socket 0 00:05:18.065 EAL: Detected lcore 4 as core 0 on socket 0 00:05:18.065 EAL: Detected lcore 5 as core 0 on socket 0 00:05:18.065 EAL: Detected lcore 6 as core 0 on socket 0 00:05:18.065 EAL: Detected lcore 7 as core 0 on socket 0 00:05:18.065 EAL: Detected lcore 8 as core 0 on socket 0 00:05:18.065 EAL: Detected lcore 9 as core 0 on socket 0 00:05:18.065 EAL: Maximum logical cores by configuration: 128 00:05:18.065 EAL: Detected CPU lcores: 10 00:05:18.065 EAL: Detected NUMA nodes: 1 00:05:18.065 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:18.065 EAL: Detected shared linkage of DPDK 00:05:18.065 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:18.065 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:18.065 EAL: Registered [vdev] bus. 00:05:18.065 EAL: bus.vdev log level changed from disabled to notice 00:05:18.065 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:18.065 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:18.065 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:18.065 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:18.065 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:18.065 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:18.065 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:18.066 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:18.066 EAL: No shared files mode enabled, IPC will be disabled 00:05:18.066 EAL: No shared files mode enabled, IPC is disabled 00:05:18.066 EAL: Selected IOVA mode 'PA' 00:05:18.066 EAL: Probing VFIO support... 00:05:18.066 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:18.066 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:18.066 EAL: Ask a virtual area of 0x2e000 bytes 00:05:18.066 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:18.066 EAL: Setting up physically contiguous memory... 
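The EAL probe sequence above (VFIO module lookup, IOVA mode selection, hugepage setup) can be sanity-checked before a run. A small pre-flight sketch using the standard /proc and /sys paths; it is not part of the test suite itself:

  # Hugepage reservation as EAL will see it (this run shows 2048 x 2048 kB pages on node0).
  grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
  # VFIO availability: without vfio-pci the EAL skips VFIO support, the devices stay on
  # uio_pci_generic and IOVA mode 'PA' is selected, exactly as in the trace above.
  if [ -d /sys/module/vfio_pci ]; then
      echo "vfio-pci loaded"
  else
      echo "vfio-pci not loaded; expect 'VFIO modules not loaded, skipping VFIO support'"
  fi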
00:05:18.066 EAL: Setting maximum number of open files to 524288 00:05:18.066 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:18.066 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:18.066 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.066 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:18.066 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.066 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.066 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:18.066 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:18.066 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.066 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:18.066 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.066 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.066 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:18.066 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:18.066 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.066 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:18.066 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.066 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.066 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:18.066 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:18.066 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.066 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:18.066 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.066 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.066 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:18.066 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:18.066 EAL: Hugepages will be freed exactly as allocated. 00:05:18.066 EAL: No shared files mode enabled, IPC is disabled 00:05:18.066 EAL: No shared files mode enabled, IPC is disabled 00:05:18.325 EAL: TSC frequency is ~2200000 KHz 00:05:18.325 EAL: Main lcore 0 is ready (tid=7f940eddfa00;cpuset=[0]) 00:05:18.325 EAL: Trying to obtain current memory policy. 00:05:18.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.325 EAL: Restoring previous memory policy: 0 00:05:18.325 EAL: request: mp_malloc_sync 00:05:18.325 EAL: No shared files mode enabled, IPC is disabled 00:05:18.325 EAL: Heap on socket 0 was expanded by 2MB 00:05:18.325 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:18.325 EAL: No shared files mode enabled, IPC is disabled 00:05:18.325 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:18.325 EAL: Mem event callback 'spdk:(nil)' registered 00:05:18.325 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:18.325 00:05:18.325 00:05:18.325 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.325 http://cunit.sourceforge.net/ 00:05:18.325 00:05:18.325 00:05:18.325 Suite: components_suite 00:05:18.325 Test: vtophys_malloc_test ...passed 00:05:18.325 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:18.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.325 EAL: Restoring previous memory policy: 4 00:05:18.325 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.325 EAL: request: mp_malloc_sync 00:05:18.325 EAL: No shared files mode enabled, IPC is disabled 00:05:18.325 EAL: Heap on socket 0 was expanded by 4MB 00:05:18.325 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.325 EAL: request: mp_malloc_sync 00:05:18.325 EAL: No shared files mode enabled, IPC is disabled 00:05:18.325 EAL: Heap on socket 0 was shrunk by 4MB 00:05:18.325 EAL: Trying to obtain current memory policy. 00:05:18.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.325 EAL: Restoring previous memory policy: 4 00:05:18.325 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.325 EAL: request: mp_malloc_sync 00:05:18.325 EAL: No shared files mode enabled, IPC is disabled 00:05:18.325 EAL: Heap on socket 0 was expanded by 6MB 00:05:18.325 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.325 EAL: request: mp_malloc_sync 00:05:18.325 EAL: No shared files mode enabled, IPC is disabled 00:05:18.325 EAL: Heap on socket 0 was shrunk by 6MB 00:05:18.325 EAL: Trying to obtain current memory policy. 00:05:18.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.325 EAL: Restoring previous memory policy: 4 00:05:18.325 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.325 EAL: request: mp_malloc_sync 00:05:18.325 EAL: No shared files mode enabled, IPC is disabled 00:05:18.325 EAL: Heap on socket 0 was expanded by 10MB 00:05:18.325 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.325 EAL: request: mp_malloc_sync 00:05:18.325 EAL: No shared files mode enabled, IPC is disabled 00:05:18.325 EAL: Heap on socket 0 was shrunk by 10MB 00:05:18.325 EAL: Trying to obtain current memory policy. 00:05:18.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.325 EAL: Restoring previous memory policy: 4 00:05:18.325 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.325 EAL: request: mp_malloc_sync 00:05:18.325 EAL: No shared files mode enabled, IPC is disabled 00:05:18.325 EAL: Heap on socket 0 was expanded by 18MB 00:05:18.325 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.325 EAL: request: mp_malloc_sync 00:05:18.325 EAL: No shared files mode enabled, IPC is disabled 00:05:18.325 EAL: Heap on socket 0 was shrunk by 18MB 00:05:18.325 EAL: Trying to obtain current memory policy. 00:05:18.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.325 EAL: Restoring previous memory policy: 4 00:05:18.325 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.325 EAL: request: mp_malloc_sync 00:05:18.325 EAL: No shared files mode enabled, IPC is disabled 00:05:18.325 EAL: Heap on socket 0 was expanded by 34MB 00:05:18.325 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.325 EAL: request: mp_malloc_sync 00:05:18.325 EAL: No shared files mode enabled, IPC is disabled 00:05:18.325 EAL: Heap on socket 0 was shrunk by 34MB 00:05:18.325 EAL: Trying to obtain current memory policy. 
00:05:18.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.326 EAL: Restoring previous memory policy: 4 00:05:18.326 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.326 EAL: request: mp_malloc_sync 00:05:18.326 EAL: No shared files mode enabled, IPC is disabled 00:05:18.326 EAL: Heap on socket 0 was expanded by 66MB 00:05:18.326 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.326 EAL: request: mp_malloc_sync 00:05:18.326 EAL: No shared files mode enabled, IPC is disabled 00:05:18.326 EAL: Heap on socket 0 was shrunk by 66MB 00:05:18.326 EAL: Trying to obtain current memory policy. 00:05:18.326 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.326 EAL: Restoring previous memory policy: 4 00:05:18.326 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.326 EAL: request: mp_malloc_sync 00:05:18.326 EAL: No shared files mode enabled, IPC is disabled 00:05:18.326 EAL: Heap on socket 0 was expanded by 130MB 00:05:18.326 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.326 EAL: request: mp_malloc_sync 00:05:18.326 EAL: No shared files mode enabled, IPC is disabled 00:05:18.326 EAL: Heap on socket 0 was shrunk by 130MB 00:05:18.326 EAL: Trying to obtain current memory policy. 00:05:18.326 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.326 EAL: Restoring previous memory policy: 4 00:05:18.326 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.326 EAL: request: mp_malloc_sync 00:05:18.326 EAL: No shared files mode enabled, IPC is disabled 00:05:18.326 EAL: Heap on socket 0 was expanded by 258MB 00:05:18.326 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.326 EAL: request: mp_malloc_sync 00:05:18.326 EAL: No shared files mode enabled, IPC is disabled 00:05:18.326 EAL: Heap on socket 0 was shrunk by 258MB 00:05:18.326 EAL: Trying to obtain current memory policy. 00:05:18.326 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.585 EAL: Restoring previous memory policy: 4 00:05:18.585 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.585 EAL: request: mp_malloc_sync 00:05:18.585 EAL: No shared files mode enabled, IPC is disabled 00:05:18.585 EAL: Heap on socket 0 was expanded by 514MB 00:05:18.585 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.585 EAL: request: mp_malloc_sync 00:05:18.585 EAL: No shared files mode enabled, IPC is disabled 00:05:18.585 EAL: Heap on socket 0 was shrunk by 514MB 00:05:18.585 EAL: Trying to obtain current memory policy. 
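The expand/shrink pairs printed by this malloc test step through a doubling allocation size; the sizes reported in this run (4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB) fit 2^k + 2 MB. A one-line sketch that reproduces the printed sequence, purely as a worked example of the pattern:

  for k in $(seq 1 10); do printf '%dMB\n' $(( (1 << k) + 2 )); done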
00:05:18.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.844 EAL: Restoring previous memory policy: 4 00:05:18.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.844 EAL: request: mp_malloc_sync 00:05:18.844 EAL: No shared files mode enabled, IPC is disabled 00:05:18.844 EAL: Heap on socket 0 was expanded by 1026MB 00:05:18.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.844 EAL: request: mp_malloc_sync 00:05:18.844 passed 00:05:18.844 00:05:18.844 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.844 suites 1 1 n/a 0 0 00:05:18.844 tests 2 2 2 0 0 00:05:18.844 asserts 5225 5225 5225 0 n/a 00:05:18.844 00:05:18.844 Elapsed time = 0.674 seconds 00:05:18.844 EAL: No shared files mode enabled, IPC is disabled 00:05:18.844 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:18.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.844 EAL: request: mp_malloc_sync 00:05:18.844 EAL: No shared files mode enabled, IPC is disabled 00:05:18.844 EAL: Heap on socket 0 was shrunk by 2MB 00:05:18.844 EAL: No shared files mode enabled, IPC is disabled 00:05:18.844 EAL: No shared files mode enabled, IPC is disabled 00:05:18.844 EAL: No shared files mode enabled, IPC is disabled 00:05:18.844 ************************************ 00:05:18.844 END TEST env_vtophys 00:05:18.844 ************************************ 00:05:18.844 00:05:18.844 real 0m0.873s 00:05:18.844 user 0m0.436s 00:05:18.844 sys 0m0.297s 00:05:18.844 21:12:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.844 21:12:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.103 21:12:42 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:19.103 21:12:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.103 21:12:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.103 21:12:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.103 ************************************ 00:05:19.103 START TEST env_pci 00:05:19.103 ************************************ 00:05:19.103 21:12:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:19.103 00:05:19.103 00:05:19.103 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.103 http://cunit.sourceforge.net/ 00:05:19.103 00:05:19.103 00:05:19.103 Suite: pci 00:05:19.103 Test: pci_hook ...[2024-11-28 21:12:42.626292] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65460 has claimed it 00:05:19.103 passed 00:05:19.103 00:05:19.103 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.103 suites 1 1 n/a 0 0 00:05:19.103 tests 1 1 1 0 0 00:05:19.103 asserts 25 25 25 0 n/a 00:05:19.103 00:05:19.103 Elapsed time = 0.002 seconds 00:05:19.103 EAL: Cannot find device (10000:00:01.0) 00:05:19.103 EAL: Failed to attach device on primary process 00:05:19.103 00:05:19.103 real 0m0.020s 00:05:19.103 user 0m0.011s 00:05:19.103 sys 0m0.008s 00:05:19.103 21:12:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.103 ************************************ 00:05:19.103 END TEST env_pci 00:05:19.103 ************************************ 00:05:19.103 21:12:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.103 21:12:42 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:19.103 21:12:42 -- env/env.sh@15 -- # uname 00:05:19.103 21:12:42 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:19.103 21:12:42 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:19.103 21:12:42 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:19.103 21:12:42 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:19.104 21:12:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.104 21:12:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.104 ************************************ 00:05:19.104 START TEST env_dpdk_post_init 00:05:19.104 ************************************ 00:05:19.104 21:12:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:19.104 EAL: Detected CPU lcores: 10 00:05:19.104 EAL: Detected NUMA nodes: 1 00:05:19.104 EAL: Detected shared linkage of DPDK 00:05:19.104 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:19.104 EAL: Selected IOVA mode 'PA' 00:05:19.104 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:19.363 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:19.363 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:19.363 Starting DPDK initialization... 00:05:19.363 Starting SPDK post initialization... 00:05:19.363 SPDK NVMe probe 00:05:19.363 Attaching to 0000:00:06.0 00:05:19.363 Attaching to 0000:00:07.0 00:05:19.363 Attached to 0000:00:06.0 00:05:19.363 Attached to 0000:00:07.0 00:05:19.363 Cleaning up... 00:05:19.363 00:05:19.363 real 0m0.176s 00:05:19.363 user 0m0.044s 00:05:19.363 sys 0m0.032s 00:05:19.363 21:12:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.363 21:12:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.363 ************************************ 00:05:19.363 END TEST env_dpdk_post_init 00:05:19.363 ************************************ 00:05:19.363 21:12:42 -- env/env.sh@26 -- # uname 00:05:19.363 21:12:42 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:19.363 21:12:42 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:19.363 21:12:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.363 21:12:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.363 21:12:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.363 ************************************ 00:05:19.363 START TEST env_mem_callbacks 00:05:19.363 ************************************ 00:05:19.363 21:12:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:19.363 EAL: Detected CPU lcores: 10 00:05:19.363 EAL: Detected NUMA nodes: 1 00:05:19.363 EAL: Detected shared linkage of DPDK 00:05:19.363 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:19.363 EAL: Selected IOVA mode 'PA' 00:05:19.363 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:19.363 00:05:19.363 00:05:19.363 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.363 http://cunit.sourceforge.net/ 00:05:19.363 00:05:19.363 00:05:19.363 Suite: memory 00:05:19.363 Test: test ... 
00:05:19.363 register 0x200000200000 2097152 00:05:19.363 malloc 3145728 00:05:19.363 register 0x200000400000 4194304 00:05:19.363 buf 0x200000500000 len 3145728 PASSED 00:05:19.363 malloc 64 00:05:19.363 buf 0x2000004fff40 len 64 PASSED 00:05:19.363 malloc 4194304 00:05:19.363 register 0x200000800000 6291456 00:05:19.363 buf 0x200000a00000 len 4194304 PASSED 00:05:19.363 free 0x200000500000 3145728 00:05:19.363 free 0x2000004fff40 64 00:05:19.363 unregister 0x200000400000 4194304 PASSED 00:05:19.363 free 0x200000a00000 4194304 00:05:19.363 unregister 0x200000800000 6291456 PASSED 00:05:19.363 malloc 8388608 00:05:19.363 register 0x200000400000 10485760 00:05:19.363 buf 0x200000600000 len 8388608 PASSED 00:05:19.363 free 0x200000600000 8388608 00:05:19.363 unregister 0x200000400000 10485760 PASSED 00:05:19.363 passed 00:05:19.363 00:05:19.363 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.363 suites 1 1 n/a 0 0 00:05:19.363 tests 1 1 1 0 0 00:05:19.363 asserts 15 15 15 0 n/a 00:05:19.363 00:05:19.363 Elapsed time = 0.008 seconds 00:05:19.363 00:05:19.363 real 0m0.139s 00:05:19.363 user 0m0.013s 00:05:19.363 sys 0m0.025s 00:05:19.363 21:12:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.363 21:12:43 -- common/autotest_common.sh@10 -- # set +x 00:05:19.363 ************************************ 00:05:19.363 END TEST env_mem_callbacks 00:05:19.363 ************************************ 00:05:19.363 00:05:19.363 real 0m1.883s 00:05:19.363 user 0m0.931s 00:05:19.363 sys 0m0.592s 00:05:19.363 21:12:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.363 ************************************ 00:05:19.363 END TEST env 00:05:19.363 21:12:43 -- common/autotest_common.sh@10 -- # set +x 00:05:19.363 ************************************ 00:05:19.622 21:12:43 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:19.623 21:12:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.623 21:12:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.623 21:12:43 -- common/autotest_common.sh@10 -- # set +x 00:05:19.623 ************************************ 00:05:19.623 START TEST rpc 00:05:19.623 ************************************ 00:05:19.623 21:12:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:19.623 * Looking for test storage... 
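In the callback trace above, every registered length is a whole multiple of the 2 MiB hugepage size: the 3145728-byte (3 MiB) malloc arrives when only a 2097152-byte region is registered, so the allocator grows the heap and the callback registers another 4194304 bytes (two hugepages). A tiny sketch of that round-up, assuming 2 MiB granularity (the exact growth also depends on allocator overhead):

  req=3145728; huge=2097152
  echo $(( ( (req + huge - 1) / huge ) * huge ))   # 4194304, matching the second register line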
00:05:19.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:19.623 21:12:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:19.623 21:12:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:19.623 21:12:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:19.623 21:12:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:19.623 21:12:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:19.623 21:12:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:19.623 21:12:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:19.623 21:12:43 -- scripts/common.sh@335 -- # IFS=.-: 00:05:19.623 21:12:43 -- scripts/common.sh@335 -- # read -ra ver1 00:05:19.623 21:12:43 -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.623 21:12:43 -- scripts/common.sh@336 -- # read -ra ver2 00:05:19.623 21:12:43 -- scripts/common.sh@337 -- # local 'op=<' 00:05:19.623 21:12:43 -- scripts/common.sh@339 -- # ver1_l=2 00:05:19.623 21:12:43 -- scripts/common.sh@340 -- # ver2_l=1 00:05:19.623 21:12:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:19.623 21:12:43 -- scripts/common.sh@343 -- # case "$op" in 00:05:19.623 21:12:43 -- scripts/common.sh@344 -- # : 1 00:05:19.623 21:12:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:19.623 21:12:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.623 21:12:43 -- scripts/common.sh@364 -- # decimal 1 00:05:19.623 21:12:43 -- scripts/common.sh@352 -- # local d=1 00:05:19.623 21:12:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.623 21:12:43 -- scripts/common.sh@354 -- # echo 1 00:05:19.623 21:12:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:19.623 21:12:43 -- scripts/common.sh@365 -- # decimal 2 00:05:19.623 21:12:43 -- scripts/common.sh@352 -- # local d=2 00:05:19.623 21:12:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.623 21:12:43 -- scripts/common.sh@354 -- # echo 2 00:05:19.623 21:12:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:19.623 21:12:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:19.623 21:12:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:19.623 21:12:43 -- scripts/common.sh@367 -- # return 0 00:05:19.623 21:12:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.623 21:12:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:19.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.623 --rc genhtml_branch_coverage=1 00:05:19.623 --rc genhtml_function_coverage=1 00:05:19.623 --rc genhtml_legend=1 00:05:19.623 --rc geninfo_all_blocks=1 00:05:19.623 --rc geninfo_unexecuted_blocks=1 00:05:19.623 00:05:19.623 ' 00:05:19.623 21:12:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:19.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.623 --rc genhtml_branch_coverage=1 00:05:19.623 --rc genhtml_function_coverage=1 00:05:19.623 --rc genhtml_legend=1 00:05:19.623 --rc geninfo_all_blocks=1 00:05:19.623 --rc geninfo_unexecuted_blocks=1 00:05:19.623 00:05:19.623 ' 00:05:19.623 21:12:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:19.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.623 --rc genhtml_branch_coverage=1 00:05:19.623 --rc genhtml_function_coverage=1 00:05:19.623 --rc genhtml_legend=1 00:05:19.623 --rc geninfo_all_blocks=1 00:05:19.623 --rc geninfo_unexecuted_blocks=1 00:05:19.623 00:05:19.623 ' 00:05:19.623 21:12:43 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:19.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.623 --rc genhtml_branch_coverage=1 00:05:19.623 --rc genhtml_function_coverage=1 00:05:19.623 --rc genhtml_legend=1 00:05:19.623 --rc geninfo_all_blocks=1 00:05:19.623 --rc geninfo_unexecuted_blocks=1 00:05:19.623 00:05:19.623 ' 00:05:19.623 21:12:43 -- rpc/rpc.sh@65 -- # spdk_pid=65582 00:05:19.623 21:12:43 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.623 21:12:43 -- rpc/rpc.sh@67 -- # waitforlisten 65582 00:05:19.623 21:12:43 -- common/autotest_common.sh@829 -- # '[' -z 65582 ']' 00:05:19.623 21:12:43 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:19.623 21:12:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.623 21:12:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.623 21:12:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.623 21:12:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.623 21:12:43 -- common/autotest_common.sh@10 -- # set +x 00:05:19.623 [2024-11-28 21:12:43.349405] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:19.623 [2024-11-28 21:12:43.349521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65582 ] 00:05:19.883 [2024-11-28 21:12:43.490309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.883 [2024-11-28 21:12:43.529086] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:19.883 [2024-11-28 21:12:43.529267] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:19.883 [2024-11-28 21:12:43.529284] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65582' to capture a snapshot of events at runtime. 00:05:19.883 [2024-11-28 21:12:43.529295] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65582 for offline analysis/debug. 
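The target started above (spdk_tgt -e bdev, RPC socket /var/tmp/spdk.sock, pid 65582) accepts the same bdev RPCs that rpc_integrity drives through rpc_cmd below. Roughly the equivalent sequence issued by hand with scripts/rpc.py; the jq filter is only illustrative:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC bdev_malloc_create 8 512                       # 8 MiB malloc bdev with 512-byte blocks; prints its name (Malloc0 here)
  $RPC bdev_passthru_create -b Malloc0 -p Passthru0   # stack a passthru bdev on the malloc bdev
  $RPC bdev_get_bdevs | jq -r '.[].name'              # expect both Malloc0 and Passthru0
  $RPC bdev_passthru_delete Passthru0
  $RPC bdev_malloc_delete Malloc0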
00:05:19.883 [2024-11-28 21:12:43.529329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.820 21:12:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.820 21:12:44 -- common/autotest_common.sh@862 -- # return 0 00:05:20.820 21:12:44 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:20.820 21:12:44 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:20.820 21:12:44 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:20.820 21:12:44 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:20.820 21:12:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.820 21:12:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.820 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:20.820 ************************************ 00:05:20.820 START TEST rpc_integrity 00:05:20.820 ************************************ 00:05:20.820 21:12:44 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:20.820 21:12:44 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:20.820 21:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.820 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:20.820 21:12:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.820 21:12:44 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:20.820 21:12:44 -- rpc/rpc.sh@13 -- # jq length 00:05:20.820 21:12:44 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:20.820 21:12:44 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:20.820 21:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.820 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:20.820 21:12:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.820 21:12:44 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:20.820 21:12:44 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:20.820 21:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.820 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:20.820 21:12:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.820 21:12:44 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:20.820 { 00:05:20.820 "name": "Malloc0", 00:05:20.820 "aliases": [ 00:05:20.820 "db82bfe9-c50f-4652-97f8-cbfe49943f81" 00:05:20.820 ], 00:05:20.820 "product_name": "Malloc disk", 00:05:20.820 "block_size": 512, 00:05:20.820 "num_blocks": 16384, 00:05:20.820 "uuid": "db82bfe9-c50f-4652-97f8-cbfe49943f81", 00:05:20.820 "assigned_rate_limits": { 00:05:20.820 "rw_ios_per_sec": 0, 00:05:20.820 "rw_mbytes_per_sec": 0, 00:05:20.820 "r_mbytes_per_sec": 0, 00:05:20.820 "w_mbytes_per_sec": 0 00:05:20.820 }, 00:05:20.820 "claimed": false, 00:05:20.820 "zoned": false, 00:05:20.820 "supported_io_types": { 00:05:20.820 "read": true, 00:05:20.820 "write": true, 00:05:20.820 "unmap": true, 00:05:20.820 "write_zeroes": true, 00:05:20.820 "flush": true, 00:05:20.820 "reset": true, 00:05:20.820 "compare": false, 00:05:20.820 "compare_and_write": false, 00:05:20.820 "abort": true, 00:05:20.820 "nvme_admin": false, 00:05:20.820 "nvme_io": false 00:05:20.820 }, 00:05:20.820 "memory_domains": [ 00:05:20.820 { 00:05:20.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.820 
"dma_device_type": 2 00:05:20.820 } 00:05:20.820 ], 00:05:20.820 "driver_specific": {} 00:05:20.820 } 00:05:20.820 ]' 00:05:20.820 21:12:44 -- rpc/rpc.sh@17 -- # jq length 00:05:20.820 21:12:44 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:20.820 21:12:44 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:20.820 21:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.821 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:20.821 [2024-11-28 21:12:44.520271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:20.821 [2024-11-28 21:12:44.520344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:20.821 [2024-11-28 21:12:44.520391] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fed790 00:05:20.821 [2024-11-28 21:12:44.520399] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:20.821 [2024-11-28 21:12:44.521785] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:20.821 [2024-11-28 21:12:44.521831] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:20.821 Passthru0 00:05:20.821 21:12:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.821 21:12:44 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:20.821 21:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.821 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:20.821 21:12:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.821 21:12:44 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:20.821 { 00:05:20.821 "name": "Malloc0", 00:05:20.821 "aliases": [ 00:05:20.821 "db82bfe9-c50f-4652-97f8-cbfe49943f81" 00:05:20.821 ], 00:05:20.821 "product_name": "Malloc disk", 00:05:20.821 "block_size": 512, 00:05:20.821 "num_blocks": 16384, 00:05:20.821 "uuid": "db82bfe9-c50f-4652-97f8-cbfe49943f81", 00:05:20.821 "assigned_rate_limits": { 00:05:20.821 "rw_ios_per_sec": 0, 00:05:20.821 "rw_mbytes_per_sec": 0, 00:05:20.821 "r_mbytes_per_sec": 0, 00:05:20.821 "w_mbytes_per_sec": 0 00:05:20.821 }, 00:05:20.821 "claimed": true, 00:05:20.821 "claim_type": "exclusive_write", 00:05:20.821 "zoned": false, 00:05:20.821 "supported_io_types": { 00:05:20.821 "read": true, 00:05:20.821 "write": true, 00:05:20.821 "unmap": true, 00:05:20.821 "write_zeroes": true, 00:05:20.821 "flush": true, 00:05:20.821 "reset": true, 00:05:20.821 "compare": false, 00:05:20.821 "compare_and_write": false, 00:05:20.821 "abort": true, 00:05:20.821 "nvme_admin": false, 00:05:20.821 "nvme_io": false 00:05:20.821 }, 00:05:20.821 "memory_domains": [ 00:05:20.821 { 00:05:20.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.821 "dma_device_type": 2 00:05:20.821 } 00:05:20.821 ], 00:05:20.821 "driver_specific": {} 00:05:20.821 }, 00:05:20.821 { 00:05:20.821 "name": "Passthru0", 00:05:20.821 "aliases": [ 00:05:20.821 "4c9698b0-09cd-5542-b8d1-921466347dd3" 00:05:20.821 ], 00:05:20.821 "product_name": "passthru", 00:05:20.821 "block_size": 512, 00:05:20.821 "num_blocks": 16384, 00:05:20.821 "uuid": "4c9698b0-09cd-5542-b8d1-921466347dd3", 00:05:20.821 "assigned_rate_limits": { 00:05:20.821 "rw_ios_per_sec": 0, 00:05:20.821 "rw_mbytes_per_sec": 0, 00:05:20.821 "r_mbytes_per_sec": 0, 00:05:20.821 "w_mbytes_per_sec": 0 00:05:20.821 }, 00:05:20.821 "claimed": false, 00:05:20.821 "zoned": false, 00:05:20.821 "supported_io_types": { 00:05:20.821 "read": true, 00:05:20.821 "write": true, 00:05:20.821 "unmap": true, 00:05:20.821 
"write_zeroes": true, 00:05:20.821 "flush": true, 00:05:20.821 "reset": true, 00:05:20.821 "compare": false, 00:05:20.821 "compare_and_write": false, 00:05:20.821 "abort": true, 00:05:20.821 "nvme_admin": false, 00:05:20.821 "nvme_io": false 00:05:20.821 }, 00:05:20.821 "memory_domains": [ 00:05:20.821 { 00:05:20.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.821 "dma_device_type": 2 00:05:20.821 } 00:05:20.821 ], 00:05:20.821 "driver_specific": { 00:05:20.821 "passthru": { 00:05:20.821 "name": "Passthru0", 00:05:20.821 "base_bdev_name": "Malloc0" 00:05:20.821 } 00:05:20.821 } 00:05:20.821 } 00:05:20.821 ]' 00:05:20.821 21:12:44 -- rpc/rpc.sh@21 -- # jq length 00:05:21.080 21:12:44 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:21.080 21:12:44 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:21.080 21:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.080 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.080 21:12:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.080 21:12:44 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:21.080 21:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.080 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.080 21:12:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.080 21:12:44 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:21.080 21:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.080 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.080 21:12:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.080 21:12:44 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:21.080 21:12:44 -- rpc/rpc.sh@26 -- # jq length 00:05:21.080 21:12:44 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:21.080 00:05:21.080 real 0m0.296s 00:05:21.080 user 0m0.203s 00:05:21.080 sys 0m0.031s 00:05:21.080 21:12:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.080 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.080 ************************************ 00:05:21.080 END TEST rpc_integrity 00:05:21.080 ************************************ 00:05:21.080 21:12:44 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:21.080 21:12:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.080 21:12:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.080 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.080 ************************************ 00:05:21.080 START TEST rpc_plugins 00:05:21.080 ************************************ 00:05:21.080 21:12:44 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:21.080 21:12:44 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:21.080 21:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.080 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.080 21:12:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.080 21:12:44 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:21.080 21:12:44 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:21.080 21:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.080 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.080 21:12:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.080 21:12:44 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:21.080 { 00:05:21.080 "name": "Malloc1", 00:05:21.080 "aliases": [ 00:05:21.080 "1118aecf-9712-45ad-9d24-a00259dfd722" 00:05:21.080 ], 00:05:21.080 "product_name": "Malloc disk", 00:05:21.080 
"block_size": 4096, 00:05:21.080 "num_blocks": 256, 00:05:21.080 "uuid": "1118aecf-9712-45ad-9d24-a00259dfd722", 00:05:21.080 "assigned_rate_limits": { 00:05:21.080 "rw_ios_per_sec": 0, 00:05:21.080 "rw_mbytes_per_sec": 0, 00:05:21.080 "r_mbytes_per_sec": 0, 00:05:21.080 "w_mbytes_per_sec": 0 00:05:21.080 }, 00:05:21.080 "claimed": false, 00:05:21.080 "zoned": false, 00:05:21.080 "supported_io_types": { 00:05:21.080 "read": true, 00:05:21.080 "write": true, 00:05:21.080 "unmap": true, 00:05:21.080 "write_zeroes": true, 00:05:21.080 "flush": true, 00:05:21.080 "reset": true, 00:05:21.080 "compare": false, 00:05:21.080 "compare_and_write": false, 00:05:21.080 "abort": true, 00:05:21.080 "nvme_admin": false, 00:05:21.080 "nvme_io": false 00:05:21.080 }, 00:05:21.080 "memory_domains": [ 00:05:21.080 { 00:05:21.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.080 "dma_device_type": 2 00:05:21.080 } 00:05:21.080 ], 00:05:21.080 "driver_specific": {} 00:05:21.080 } 00:05:21.080 ]' 00:05:21.080 21:12:44 -- rpc/rpc.sh@32 -- # jq length 00:05:21.080 21:12:44 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:21.080 21:12:44 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:21.080 21:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.080 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.080 21:12:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.080 21:12:44 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:21.080 21:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.080 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.080 21:12:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.080 21:12:44 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:21.340 21:12:44 -- rpc/rpc.sh@36 -- # jq length 00:05:21.340 21:12:44 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:21.340 00:05:21.340 real 0m0.156s 00:05:21.340 user 0m0.102s 00:05:21.340 sys 0m0.018s 00:05:21.340 21:12:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.340 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.340 ************************************ 00:05:21.340 END TEST rpc_plugins 00:05:21.340 ************************************ 00:05:21.340 21:12:44 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:21.340 21:12:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.340 21:12:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.340 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.340 ************************************ 00:05:21.340 START TEST rpc_trace_cmd_test 00:05:21.340 ************************************ 00:05:21.340 21:12:44 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:21.340 21:12:44 -- rpc/rpc.sh@40 -- # local info 00:05:21.340 21:12:44 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:21.340 21:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.340 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.340 21:12:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.340 21:12:44 -- rpc/rpc.sh@42 -- # info='{ 00:05:21.340 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65582", 00:05:21.340 "tpoint_group_mask": "0x8", 00:05:21.340 "iscsi_conn": { 00:05:21.340 "mask": "0x2", 00:05:21.340 "tpoint_mask": "0x0" 00:05:21.340 }, 00:05:21.340 "scsi": { 00:05:21.340 "mask": "0x4", 00:05:21.340 "tpoint_mask": "0x0" 00:05:21.340 }, 00:05:21.340 "bdev": { 00:05:21.340 "mask": "0x8", 00:05:21.340 "tpoint_mask": 
"0xffffffffffffffff" 00:05:21.340 }, 00:05:21.340 "nvmf_rdma": { 00:05:21.340 "mask": "0x10", 00:05:21.340 "tpoint_mask": "0x0" 00:05:21.340 }, 00:05:21.340 "nvmf_tcp": { 00:05:21.340 "mask": "0x20", 00:05:21.340 "tpoint_mask": "0x0" 00:05:21.340 }, 00:05:21.340 "ftl": { 00:05:21.340 "mask": "0x40", 00:05:21.340 "tpoint_mask": "0x0" 00:05:21.340 }, 00:05:21.340 "blobfs": { 00:05:21.340 "mask": "0x80", 00:05:21.340 "tpoint_mask": "0x0" 00:05:21.340 }, 00:05:21.340 "dsa": { 00:05:21.340 "mask": "0x200", 00:05:21.340 "tpoint_mask": "0x0" 00:05:21.340 }, 00:05:21.340 "thread": { 00:05:21.340 "mask": "0x400", 00:05:21.340 "tpoint_mask": "0x0" 00:05:21.340 }, 00:05:21.340 "nvme_pcie": { 00:05:21.340 "mask": "0x800", 00:05:21.340 "tpoint_mask": "0x0" 00:05:21.340 }, 00:05:21.340 "iaa": { 00:05:21.340 "mask": "0x1000", 00:05:21.340 "tpoint_mask": "0x0" 00:05:21.340 }, 00:05:21.340 "nvme_tcp": { 00:05:21.340 "mask": "0x2000", 00:05:21.340 "tpoint_mask": "0x0" 00:05:21.340 }, 00:05:21.340 "bdev_nvme": { 00:05:21.340 "mask": "0x4000", 00:05:21.340 "tpoint_mask": "0x0" 00:05:21.340 } 00:05:21.340 }' 00:05:21.340 21:12:44 -- rpc/rpc.sh@43 -- # jq length 00:05:21.340 21:12:44 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:21.340 21:12:44 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:21.340 21:12:45 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:21.340 21:12:45 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:21.598 21:12:45 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:21.598 21:12:45 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:21.598 21:12:45 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:21.598 21:12:45 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:21.598 21:12:45 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:21.598 00:05:21.598 real 0m0.277s 00:05:21.598 user 0m0.240s 00:05:21.598 sys 0m0.028s 00:05:21.598 21:12:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.598 ************************************ 00:05:21.598 END TEST rpc_trace_cmd_test 00:05:21.598 ************************************ 00:05:21.598 21:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.598 21:12:45 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:21.598 21:12:45 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:21.598 21:12:45 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:21.598 21:12:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.598 21:12:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.598 21:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.598 ************************************ 00:05:21.598 START TEST rpc_daemon_integrity 00:05:21.598 ************************************ 00:05:21.598 21:12:45 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:21.598 21:12:45 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:21.598 21:12:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.598 21:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.598 21:12:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.598 21:12:45 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:21.598 21:12:45 -- rpc/rpc.sh@13 -- # jq length 00:05:21.598 21:12:45 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:21.598 21:12:45 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:21.598 21:12:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.598 21:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.598 21:12:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.598 21:12:45 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:21.598 21:12:45 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:21.598 21:12:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.598 21:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.598 21:12:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.598 21:12:45 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:21.598 { 00:05:21.598 "name": "Malloc2", 00:05:21.598 "aliases": [ 00:05:21.598 "735c0449-458a-4104-8809-7b73f4ad6fc7" 00:05:21.598 ], 00:05:21.598 "product_name": "Malloc disk", 00:05:21.598 "block_size": 512, 00:05:21.598 "num_blocks": 16384, 00:05:21.598 "uuid": "735c0449-458a-4104-8809-7b73f4ad6fc7", 00:05:21.598 "assigned_rate_limits": { 00:05:21.598 "rw_ios_per_sec": 0, 00:05:21.598 "rw_mbytes_per_sec": 0, 00:05:21.598 "r_mbytes_per_sec": 0, 00:05:21.598 "w_mbytes_per_sec": 0 00:05:21.598 }, 00:05:21.598 "claimed": false, 00:05:21.598 "zoned": false, 00:05:21.598 "supported_io_types": { 00:05:21.598 "read": true, 00:05:21.598 "write": true, 00:05:21.598 "unmap": true, 00:05:21.598 "write_zeroes": true, 00:05:21.598 "flush": true, 00:05:21.598 "reset": true, 00:05:21.598 "compare": false, 00:05:21.598 "compare_and_write": false, 00:05:21.598 "abort": true, 00:05:21.598 "nvme_admin": false, 00:05:21.598 "nvme_io": false 00:05:21.598 }, 00:05:21.598 "memory_domains": [ 00:05:21.598 { 00:05:21.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.598 "dma_device_type": 2 00:05:21.598 } 00:05:21.598 ], 00:05:21.598 "driver_specific": {} 00:05:21.598 } 00:05:21.598 ]' 00:05:21.598 21:12:45 -- rpc/rpc.sh@17 -- # jq length 00:05:21.857 21:12:45 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:21.857 21:12:45 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:21.857 21:12:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.857 21:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.857 [2024-11-28 21:12:45.397565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:21.857 [2024-11-28 21:12:45.397637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:21.857 [2024-11-28 21:12:45.397669] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fdefe0 00:05:21.857 [2024-11-28 21:12:45.397677] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:21.857 [2024-11-28 21:12:45.398954] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:21.857 [2024-11-28 21:12:45.399000] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:21.857 Passthru0 00:05:21.857 21:12:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.857 21:12:45 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:21.857 21:12:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.857 21:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.857 21:12:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.857 21:12:45 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:21.857 { 00:05:21.857 "name": "Malloc2", 00:05:21.857 "aliases": [ 00:05:21.857 "735c0449-458a-4104-8809-7b73f4ad6fc7" 00:05:21.857 ], 00:05:21.857 "product_name": "Malloc disk", 00:05:21.857 "block_size": 512, 00:05:21.857 "num_blocks": 16384, 00:05:21.857 "uuid": "735c0449-458a-4104-8809-7b73f4ad6fc7", 00:05:21.857 "assigned_rate_limits": { 00:05:21.857 "rw_ios_per_sec": 0, 00:05:21.857 "rw_mbytes_per_sec": 0, 00:05:21.857 "r_mbytes_per_sec": 0, 00:05:21.857 
"w_mbytes_per_sec": 0 00:05:21.857 }, 00:05:21.857 "claimed": true, 00:05:21.857 "claim_type": "exclusive_write", 00:05:21.857 "zoned": false, 00:05:21.857 "supported_io_types": { 00:05:21.857 "read": true, 00:05:21.857 "write": true, 00:05:21.857 "unmap": true, 00:05:21.857 "write_zeroes": true, 00:05:21.857 "flush": true, 00:05:21.857 "reset": true, 00:05:21.857 "compare": false, 00:05:21.857 "compare_and_write": false, 00:05:21.857 "abort": true, 00:05:21.857 "nvme_admin": false, 00:05:21.857 "nvme_io": false 00:05:21.857 }, 00:05:21.857 "memory_domains": [ 00:05:21.857 { 00:05:21.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.857 "dma_device_type": 2 00:05:21.857 } 00:05:21.857 ], 00:05:21.857 "driver_specific": {} 00:05:21.857 }, 00:05:21.857 { 00:05:21.857 "name": "Passthru0", 00:05:21.857 "aliases": [ 00:05:21.857 "99f9ac1d-3201-550d-883d-2d169866f4b7" 00:05:21.857 ], 00:05:21.857 "product_name": "passthru", 00:05:21.857 "block_size": 512, 00:05:21.857 "num_blocks": 16384, 00:05:21.857 "uuid": "99f9ac1d-3201-550d-883d-2d169866f4b7", 00:05:21.857 "assigned_rate_limits": { 00:05:21.857 "rw_ios_per_sec": 0, 00:05:21.857 "rw_mbytes_per_sec": 0, 00:05:21.857 "r_mbytes_per_sec": 0, 00:05:21.857 "w_mbytes_per_sec": 0 00:05:21.857 }, 00:05:21.857 "claimed": false, 00:05:21.857 "zoned": false, 00:05:21.857 "supported_io_types": { 00:05:21.857 "read": true, 00:05:21.857 "write": true, 00:05:21.857 "unmap": true, 00:05:21.857 "write_zeroes": true, 00:05:21.857 "flush": true, 00:05:21.857 "reset": true, 00:05:21.857 "compare": false, 00:05:21.857 "compare_and_write": false, 00:05:21.857 "abort": true, 00:05:21.857 "nvme_admin": false, 00:05:21.857 "nvme_io": false 00:05:21.857 }, 00:05:21.857 "memory_domains": [ 00:05:21.857 { 00:05:21.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.857 "dma_device_type": 2 00:05:21.857 } 00:05:21.857 ], 00:05:21.857 "driver_specific": { 00:05:21.857 "passthru": { 00:05:21.857 "name": "Passthru0", 00:05:21.857 "base_bdev_name": "Malloc2" 00:05:21.857 } 00:05:21.857 } 00:05:21.857 } 00:05:21.857 ]' 00:05:21.857 21:12:45 -- rpc/rpc.sh@21 -- # jq length 00:05:21.857 21:12:45 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:21.857 21:12:45 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:21.857 21:12:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.857 21:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.857 21:12:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.857 21:12:45 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:21.857 21:12:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.857 21:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.857 21:12:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.857 21:12:45 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:21.857 21:12:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.857 21:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.857 21:12:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.857 21:12:45 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:21.857 21:12:45 -- rpc/rpc.sh@26 -- # jq length 00:05:21.857 21:12:45 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:21.857 00:05:21.857 real 0m0.314s 00:05:21.857 user 0m0.205s 00:05:21.857 sys 0m0.038s 00:05:21.857 21:12:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.857 ************************************ 00:05:21.857 END TEST rpc_daemon_integrity 00:05:21.857 21:12:45 -- common/autotest_common.sh@10 -- # set 
+x 00:05:21.857 ************************************ 00:05:22.117 21:12:45 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:22.117 21:12:45 -- rpc/rpc.sh@84 -- # killprocess 65582 00:05:22.117 21:12:45 -- common/autotest_common.sh@936 -- # '[' -z 65582 ']' 00:05:22.117 21:12:45 -- common/autotest_common.sh@940 -- # kill -0 65582 00:05:22.117 21:12:45 -- common/autotest_common.sh@941 -- # uname 00:05:22.117 21:12:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:22.117 21:12:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65582 00:05:22.117 21:12:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:22.117 21:12:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:22.117 killing process with pid 65582 00:05:22.117 21:12:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65582' 00:05:22.117 21:12:45 -- common/autotest_common.sh@955 -- # kill 65582 00:05:22.117 21:12:45 -- common/autotest_common.sh@960 -- # wait 65582 00:05:22.376 00:05:22.376 real 0m2.726s 00:05:22.376 user 0m3.685s 00:05:22.376 sys 0m0.564s 00:05:22.376 21:12:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.376 ************************************ 00:05:22.376 21:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:22.376 END TEST rpc 00:05:22.376 ************************************ 00:05:22.376 21:12:45 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:22.376 21:12:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.376 21:12:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.376 21:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:22.376 ************************************ 00:05:22.376 START TEST rpc_client 00:05:22.376 ************************************ 00:05:22.376 21:12:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:22.376 * Looking for test storage... 00:05:22.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:22.376 21:12:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:22.376 21:12:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:22.376 21:12:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:22.376 21:12:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:22.376 21:12:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:22.376 21:12:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:22.376 21:12:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:22.376 21:12:46 -- scripts/common.sh@335 -- # IFS=.-: 00:05:22.376 21:12:46 -- scripts/common.sh@335 -- # read -ra ver1 00:05:22.376 21:12:46 -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.376 21:12:46 -- scripts/common.sh@336 -- # read -ra ver2 00:05:22.376 21:12:46 -- scripts/common.sh@337 -- # local 'op=<' 00:05:22.376 21:12:46 -- scripts/common.sh@339 -- # ver1_l=2 00:05:22.376 21:12:46 -- scripts/common.sh@340 -- # ver2_l=1 00:05:22.376 21:12:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:22.376 21:12:46 -- scripts/common.sh@343 -- # case "$op" in 00:05:22.376 21:12:46 -- scripts/common.sh@344 -- # : 1 00:05:22.376 21:12:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:22.376 21:12:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.376 21:12:46 -- scripts/common.sh@364 -- # decimal 1 00:05:22.376 21:12:46 -- scripts/common.sh@352 -- # local d=1 00:05:22.376 21:12:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.376 21:12:46 -- scripts/common.sh@354 -- # echo 1 00:05:22.376 21:12:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:22.376 21:12:46 -- scripts/common.sh@365 -- # decimal 2 00:05:22.376 21:12:46 -- scripts/common.sh@352 -- # local d=2 00:05:22.376 21:12:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.376 21:12:46 -- scripts/common.sh@354 -- # echo 2 00:05:22.376 21:12:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:22.376 21:12:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:22.376 21:12:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:22.377 21:12:46 -- scripts/common.sh@367 -- # return 0 00:05:22.377 21:12:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.377 21:12:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.377 --rc genhtml_branch_coverage=1 00:05:22.377 --rc genhtml_function_coverage=1 00:05:22.377 --rc genhtml_legend=1 00:05:22.377 --rc geninfo_all_blocks=1 00:05:22.377 --rc geninfo_unexecuted_blocks=1 00:05:22.377 00:05:22.377 ' 00:05:22.377 21:12:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.377 --rc genhtml_branch_coverage=1 00:05:22.377 --rc genhtml_function_coverage=1 00:05:22.377 --rc genhtml_legend=1 00:05:22.377 --rc geninfo_all_blocks=1 00:05:22.377 --rc geninfo_unexecuted_blocks=1 00:05:22.377 00:05:22.377 ' 00:05:22.377 21:12:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.377 --rc genhtml_branch_coverage=1 00:05:22.377 --rc genhtml_function_coverage=1 00:05:22.377 --rc genhtml_legend=1 00:05:22.377 --rc geninfo_all_blocks=1 00:05:22.377 --rc geninfo_unexecuted_blocks=1 00:05:22.377 00:05:22.377 ' 00:05:22.377 21:12:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.377 --rc genhtml_branch_coverage=1 00:05:22.377 --rc genhtml_function_coverage=1 00:05:22.377 --rc genhtml_legend=1 00:05:22.377 --rc geninfo_all_blocks=1 00:05:22.377 --rc geninfo_unexecuted_blocks=1 00:05:22.377 00:05:22.377 ' 00:05:22.377 21:12:46 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:22.377 OK 00:05:22.377 21:12:46 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:22.377 00:05:22.377 real 0m0.181s 00:05:22.377 user 0m0.121s 00:05:22.377 sys 0m0.070s 00:05:22.377 21:12:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.377 21:12:46 -- common/autotest_common.sh@10 -- # set +x 00:05:22.377 ************************************ 00:05:22.377 END TEST rpc_client 00:05:22.377 ************************************ 00:05:22.637 21:12:46 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:22.637 21:12:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.637 21:12:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.637 21:12:46 -- common/autotest_common.sh@10 -- # set +x 00:05:22.637 ************************************ 00:05:22.637 START TEST 
json_config 00:05:22.637 ************************************ 00:05:22.637 21:12:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:22.637 21:12:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:22.637 21:12:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:22.637 21:12:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:22.637 21:12:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:22.637 21:12:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:22.637 21:12:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:22.637 21:12:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:22.637 21:12:46 -- scripts/common.sh@335 -- # IFS=.-: 00:05:22.637 21:12:46 -- scripts/common.sh@335 -- # read -ra ver1 00:05:22.637 21:12:46 -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.637 21:12:46 -- scripts/common.sh@336 -- # read -ra ver2 00:05:22.637 21:12:46 -- scripts/common.sh@337 -- # local 'op=<' 00:05:22.637 21:12:46 -- scripts/common.sh@339 -- # ver1_l=2 00:05:22.637 21:12:46 -- scripts/common.sh@340 -- # ver2_l=1 00:05:22.637 21:12:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:22.637 21:12:46 -- scripts/common.sh@343 -- # case "$op" in 00:05:22.637 21:12:46 -- scripts/common.sh@344 -- # : 1 00:05:22.637 21:12:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:22.637 21:12:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.637 21:12:46 -- scripts/common.sh@364 -- # decimal 1 00:05:22.637 21:12:46 -- scripts/common.sh@352 -- # local d=1 00:05:22.637 21:12:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.637 21:12:46 -- scripts/common.sh@354 -- # echo 1 00:05:22.637 21:12:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:22.637 21:12:46 -- scripts/common.sh@365 -- # decimal 2 00:05:22.637 21:12:46 -- scripts/common.sh@352 -- # local d=2 00:05:22.637 21:12:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.637 21:12:46 -- scripts/common.sh@354 -- # echo 2 00:05:22.637 21:12:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:22.637 21:12:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:22.637 21:12:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:22.637 21:12:46 -- scripts/common.sh@367 -- # return 0 00:05:22.637 21:12:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.637 21:12:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:22.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.637 --rc genhtml_branch_coverage=1 00:05:22.637 --rc genhtml_function_coverage=1 00:05:22.637 --rc genhtml_legend=1 00:05:22.637 --rc geninfo_all_blocks=1 00:05:22.637 --rc geninfo_unexecuted_blocks=1 00:05:22.637 00:05:22.637 ' 00:05:22.637 21:12:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:22.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.637 --rc genhtml_branch_coverage=1 00:05:22.637 --rc genhtml_function_coverage=1 00:05:22.637 --rc genhtml_legend=1 00:05:22.637 --rc geninfo_all_blocks=1 00:05:22.637 --rc geninfo_unexecuted_blocks=1 00:05:22.637 00:05:22.637 ' 00:05:22.637 21:12:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:22.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.637 --rc genhtml_branch_coverage=1 00:05:22.637 --rc genhtml_function_coverage=1 00:05:22.637 --rc genhtml_legend=1 00:05:22.637 --rc 
geninfo_all_blocks=1 00:05:22.637 --rc geninfo_unexecuted_blocks=1 00:05:22.637 00:05:22.637 ' 00:05:22.637 21:12:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:22.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.637 --rc genhtml_branch_coverage=1 00:05:22.637 --rc genhtml_function_coverage=1 00:05:22.637 --rc genhtml_legend=1 00:05:22.637 --rc geninfo_all_blocks=1 00:05:22.637 --rc geninfo_unexecuted_blocks=1 00:05:22.637 00:05:22.637 ' 00:05:22.637 21:12:46 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:22.637 21:12:46 -- nvmf/common.sh@7 -- # uname -s 00:05:22.637 21:12:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:22.637 21:12:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:22.637 21:12:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:22.637 21:12:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:22.637 21:12:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:22.637 21:12:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:22.637 21:12:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:22.637 21:12:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:22.637 21:12:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:22.637 21:12:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:22.637 21:12:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:05:22.637 21:12:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:05:22.637 21:12:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:22.637 21:12:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:22.637 21:12:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:22.637 21:12:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:22.637 21:12:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:22.637 21:12:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:22.637 21:12:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:22.637 21:12:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.637 21:12:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.637 21:12:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.637 
21:12:46 -- paths/export.sh@5 -- # export PATH 00:05:22.638 21:12:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.638 21:12:46 -- nvmf/common.sh@46 -- # : 0 00:05:22.638 21:12:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:22.638 21:12:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:22.638 21:12:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:22.638 21:12:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:22.638 21:12:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:22.638 21:12:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:22.638 21:12:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:22.638 21:12:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:22.638 21:12:46 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:22.638 21:12:46 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:22.638 21:12:46 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:22.638 21:12:46 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:22.638 21:12:46 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:22.638 21:12:46 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:22.638 21:12:46 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:22.638 21:12:46 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:22.638 21:12:46 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:22.638 21:12:46 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:22.638 21:12:46 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:22.638 21:12:46 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:22.638 21:12:46 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:22.638 21:12:46 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:22.638 INFO: JSON configuration test init 00:05:22.638 21:12:46 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:22.638 21:12:46 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:22.638 21:12:46 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:22.638 21:12:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.638 21:12:46 -- common/autotest_common.sh@10 -- # set +x 00:05:22.638 21:12:46 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:22.638 21:12:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.638 21:12:46 -- common/autotest_common.sh@10 -- # set +x 00:05:22.638 21:12:46 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:22.638 21:12:46 -- json_config/json_config.sh@98 -- # local app=target 00:05:22.638 
21:12:46 -- json_config/json_config.sh@99 -- # shift 00:05:22.638 21:12:46 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:22.638 21:12:46 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:22.638 21:12:46 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:22.638 21:12:46 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:22.638 21:12:46 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:22.638 21:12:46 -- json_config/json_config.sh@111 -- # app_pid[$app]=65835 00:05:22.638 Waiting for target to run... 00:05:22.638 21:12:46 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:22.638 21:12:46 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:22.638 21:12:46 -- json_config/json_config.sh@114 -- # waitforlisten 65835 /var/tmp/spdk_tgt.sock 00:05:22.638 21:12:46 -- common/autotest_common.sh@829 -- # '[' -z 65835 ']' 00:05:22.638 21:12:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:22.638 21:12:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:22.638 21:12:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:22.638 21:12:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.638 21:12:46 -- common/autotest_common.sh@10 -- # set +x 00:05:22.638 [2024-11-28 21:12:46.370803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:22.638 [2024-11-28 21:12:46.370917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65835 ] 00:05:23.206 [2024-11-28 21:12:46.671815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.206 [2024-11-28 21:12:46.689613] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:23.206 [2024-11-28 21:12:46.689788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.774 21:12:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.774 21:12:47 -- common/autotest_common.sh@862 -- # return 0 00:05:23.774 00:05:23.774 21:12:47 -- json_config/json_config.sh@115 -- # echo '' 00:05:23.774 21:12:47 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:23.774 21:12:47 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:23.774 21:12:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.774 21:12:47 -- common/autotest_common.sh@10 -- # set +x 00:05:23.774 21:12:47 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:23.774 21:12:47 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:23.774 21:12:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.774 21:12:47 -- common/autotest_common.sh@10 -- # set +x 00:05:23.774 21:12:47 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:23.774 21:12:47 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:23.774 21:12:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
load_config 00:05:24.342 21:12:47 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:24.342 21:12:47 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:24.342 21:12:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.342 21:12:47 -- common/autotest_common.sh@10 -- # set +x 00:05:24.342 21:12:47 -- json_config/json_config.sh@48 -- # local ret=0 00:05:24.342 21:12:47 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:24.342 21:12:47 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:24.342 21:12:47 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:24.342 21:12:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:24.343 21:12:47 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:24.602 21:12:48 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:24.602 21:12:48 -- json_config/json_config.sh@51 -- # local get_types 00:05:24.602 21:12:48 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:24.602 21:12:48 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:24.602 21:12:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.602 21:12:48 -- common/autotest_common.sh@10 -- # set +x 00:05:24.602 21:12:48 -- json_config/json_config.sh@58 -- # return 0 00:05:24.602 21:12:48 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:24.602 21:12:48 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:24.602 21:12:48 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:24.602 21:12:48 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:24.602 21:12:48 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:24.602 21:12:48 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:24.602 21:12:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.602 21:12:48 -- common/autotest_common.sh@10 -- # set +x 00:05:24.602 21:12:48 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:24.602 21:12:48 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:24.602 21:12:48 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:24.602 21:12:48 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:24.602 21:12:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:24.861 MallocForNvmf0 00:05:24.861 21:12:48 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:24.861 21:12:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:25.120 MallocForNvmf1 00:05:25.120 21:12:48 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:25.120 21:12:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:25.120 [2024-11-28 21:12:48.845567] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:25.120 21:12:48 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:25.120 21:12:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:25.687 21:12:49 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:25.687 21:12:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:25.687 21:12:49 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:25.687 21:12:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:25.947 21:12:49 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:25.947 21:12:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:26.208 [2024-11-28 21:12:49.770079] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:26.208 21:12:49 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:26.208 21:12:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.208 21:12:49 -- common/autotest_common.sh@10 -- # set +x 00:05:26.208 21:12:49 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:26.208 21:12:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.208 21:12:49 -- common/autotest_common.sh@10 -- # set +x 00:05:26.208 21:12:49 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:26.208 21:12:49 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:26.208 21:12:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:26.467 MallocBdevForConfigChangeCheck 00:05:26.467 21:12:50 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:26.467 21:12:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.467 21:12:50 -- common/autotest_common.sh@10 -- # set +x 00:05:26.467 21:12:50 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:26.467 21:12:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.036 INFO: shutting down applications... 00:05:27.036 21:12:50 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
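For reference, the nvmf target configuration that json_config builds in the trace above boils down to the following RPC sequence (a sketch assuming the target socket at /var/tmp/spdk_tgt.sock and the repo layout used in this run; each call mirrors one logged above, and the final redirect to spdk_tgt_config.json is a simplification of where json_config.sh keeps its snapshot):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock
# two malloc bdevs that will back the namespaces
$rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport, one subsystem, both namespaces, and a listener on 127.0.0.1:4420
$rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
$rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
# snapshot the resulting configuration for the compare steps that follow
$rpc -s $sock save_config > spdk_tgt_config.json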
00:05:27.036 21:12:50 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:27.036 21:12:50 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:27.036 21:12:50 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:27.036 21:12:50 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:27.295 Calling clear_iscsi_subsystem 00:05:27.295 Calling clear_nvmf_subsystem 00:05:27.295 Calling clear_nbd_subsystem 00:05:27.295 Calling clear_ublk_subsystem 00:05:27.295 Calling clear_vhost_blk_subsystem 00:05:27.295 Calling clear_vhost_scsi_subsystem 00:05:27.295 Calling clear_scheduler_subsystem 00:05:27.295 Calling clear_bdev_subsystem 00:05:27.295 Calling clear_accel_subsystem 00:05:27.295 Calling clear_vmd_subsystem 00:05:27.295 Calling clear_sock_subsystem 00:05:27.295 Calling clear_iobuf_subsystem 00:05:27.295 21:12:50 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:27.295 21:12:50 -- json_config/json_config.sh@396 -- # count=100 00:05:27.295 21:12:50 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:27.295 21:12:50 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:27.295 21:12:50 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.295 21:12:50 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:27.554 21:12:51 -- json_config/json_config.sh@398 -- # break 00:05:27.554 21:12:51 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:27.554 21:12:51 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:27.554 21:12:51 -- json_config/json_config.sh@120 -- # local app=target 00:05:27.554 21:12:51 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:27.554 21:12:51 -- json_config/json_config.sh@124 -- # [[ -n 65835 ]] 00:05:27.554 21:12:51 -- json_config/json_config.sh@127 -- # kill -SIGINT 65835 00:05:27.554 21:12:51 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:27.554 21:12:51 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:27.554 21:12:51 -- json_config/json_config.sh@130 -- # kill -0 65835 00:05:27.554 21:12:51 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:28.124 21:12:51 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:28.124 21:12:51 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:28.124 21:12:51 -- json_config/json_config.sh@130 -- # kill -0 65835 00:05:28.124 21:12:51 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:28.124 21:12:51 -- json_config/json_config.sh@132 -- # break 00:05:28.124 21:12:51 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:28.124 SPDK target shutdown done 00:05:28.124 21:12:51 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:28.124 INFO: relaunching applications... 00:05:28.124 21:12:51 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
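The clear-and-verify step driven above can be approximated as follows (a sketch run from the repo root; the retry loop in json_config.sh and the exact ordering of the config_filter.py stages are simplified here):
cd /home/vagrant/spdk_repo/spdk
# tear down every configured subsystem over the same RPC socket
./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
# then confirm that nothing beyond global parameters remains in the saved config
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
  | ./test/json_config/config_filter.py -method delete_global_parameters \
  | ./test/json_config/config_filter.py -method check_empty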
00:05:28.124 21:12:51 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:28.124 21:12:51 -- json_config/json_config.sh@98 -- # local app=target 00:05:28.124 21:12:51 -- json_config/json_config.sh@99 -- # shift 00:05:28.124 21:12:51 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:28.124 21:12:51 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:28.124 21:12:51 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:28.124 21:12:51 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:28.124 21:12:51 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:28.124 21:12:51 -- json_config/json_config.sh@111 -- # app_pid[$app]=66020 00:05:28.124 Waiting for target to run... 00:05:28.124 21:12:51 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:28.124 21:12:51 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:28.124 21:12:51 -- json_config/json_config.sh@114 -- # waitforlisten 66020 /var/tmp/spdk_tgt.sock 00:05:28.124 21:12:51 -- common/autotest_common.sh@829 -- # '[' -z 66020 ']' 00:05:28.124 21:12:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.124 21:12:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.124 21:12:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.124 21:12:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.124 21:12:51 -- common/autotest_common.sh@10 -- # set +x 00:05:28.124 [2024-11-28 21:12:51.794336] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:28.124 [2024-11-28 21:12:51.794453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66020 ] 00:05:28.383 [2024-11-28 21:12:52.092041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.383 [2024-11-28 21:12:52.109703] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.383 [2024-11-28 21:12:52.109871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.951 [2024-11-28 21:12:52.402317] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:28.951 [2024-11-28 21:12:52.434427] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:29.210 21:12:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.210 00:05:29.210 21:12:52 -- common/autotest_common.sh@862 -- # return 0 00:05:29.210 21:12:52 -- json_config/json_config.sh@115 -- # echo '' 00:05:29.210 21:12:52 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:29.210 INFO: Checking if target configuration is the same... 00:05:29.210 21:12:52 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
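The "same configuration" check announced here compares a fresh save_config dump from the relaunched target against the JSON file it was started from; in essence (a sketch using the helpers visible in this log, with process substitution standing in for the /dev/fd/62 plumbing seen in the trace below):
cd /home/vagrant/spdk_repo/spdk
./test/json_config/json_diff.sh \
  <(./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) \
  ./spdk_tgt_config.json
# json_diff.sh runs both inputs through config_filter.py -method sort and diffs
# the results, exiting 0 when they match, as in the run that follows.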
00:05:29.210 21:12:52 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:29.210 21:12:52 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:29.210 21:12:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.210 + '[' 2 -ne 2 ']' 00:05:29.210 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:29.210 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:29.210 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:29.210 +++ basename /dev/fd/62 00:05:29.210 ++ mktemp /tmp/62.XXX 00:05:29.210 + tmp_file_1=/tmp/62.lBY 00:05:29.210 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:29.210 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:29.210 + tmp_file_2=/tmp/spdk_tgt_config.json.nGm 00:05:29.210 + ret=0 00:05:29.210 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:29.485 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:29.485 + diff -u /tmp/62.lBY /tmp/spdk_tgt_config.json.nGm 00:05:29.485 INFO: JSON config files are the same 00:05:29.485 + echo 'INFO: JSON config files are the same' 00:05:29.485 + rm /tmp/62.lBY /tmp/spdk_tgt_config.json.nGm 00:05:29.485 + exit 0 00:05:29.485 21:12:53 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:29.485 INFO: changing configuration and checking if this can be detected... 00:05:29.485 21:12:53 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:29.486 21:12:53 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:29.486 21:12:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:29.770 21:12:53 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:29.770 21:12:53 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:29.770 21:12:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.770 + '[' 2 -ne 2 ']' 00:05:29.770 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:29.770 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:29.770 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:29.770 +++ basename /dev/fd/62 00:05:29.770 ++ mktemp /tmp/62.XXX 00:05:29.770 + tmp_file_1=/tmp/62.Quf 00:05:29.770 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:29.770 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:29.770 + tmp_file_2=/tmp/spdk_tgt_config.json.Cws 00:05:29.770 + ret=0 00:05:29.770 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:30.040 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:30.300 + diff -u /tmp/62.Quf /tmp/spdk_tgt_config.json.Cws 00:05:30.300 + ret=1 00:05:30.300 + echo '=== Start of file: /tmp/62.Quf ===' 00:05:30.300 + cat /tmp/62.Quf 00:05:30.300 + echo '=== End of file: /tmp/62.Quf ===' 00:05:30.300 + echo '' 00:05:30.300 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Cws ===' 00:05:30.300 + cat /tmp/spdk_tgt_config.json.Cws 00:05:30.300 + echo '=== End of file: /tmp/spdk_tgt_config.json.Cws ===' 00:05:30.300 + echo '' 00:05:30.300 + rm /tmp/62.Quf /tmp/spdk_tgt_config.json.Cws 00:05:30.300 + exit 1 00:05:30.300 INFO: configuration change detected. 00:05:30.300 21:12:53 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:30.300 21:12:53 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:30.300 21:12:53 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:30.300 21:12:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.300 21:12:53 -- common/autotest_common.sh@10 -- # set +x 00:05:30.300 21:12:53 -- json_config/json_config.sh@360 -- # local ret=0 00:05:30.300 21:12:53 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:30.300 21:12:53 -- json_config/json_config.sh@370 -- # [[ -n 66020 ]] 00:05:30.300 21:12:53 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:30.300 21:12:53 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:30.300 21:12:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.300 21:12:53 -- common/autotest_common.sh@10 -- # set +x 00:05:30.300 21:12:53 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:30.300 21:12:53 -- json_config/json_config.sh@246 -- # uname -s 00:05:30.300 21:12:53 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:30.300 21:12:53 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:30.300 21:12:53 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:30.300 21:12:53 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:30.300 21:12:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.300 21:12:53 -- common/autotest_common.sh@10 -- # set +x 00:05:30.300 21:12:53 -- json_config/json_config.sh@376 -- # killprocess 66020 00:05:30.300 21:12:53 -- common/autotest_common.sh@936 -- # '[' -z 66020 ']' 00:05:30.300 21:12:53 -- common/autotest_common.sh@940 -- # kill -0 66020 00:05:30.300 21:12:53 -- common/autotest_common.sh@941 -- # uname 00:05:30.300 21:12:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:30.300 21:12:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66020 00:05:30.300 21:12:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:30.300 killing process with pid 66020 00:05:30.300 21:12:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:30.300 21:12:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66020' 00:05:30.300 
21:12:53 -- common/autotest_common.sh@955 -- # kill 66020 00:05:30.300 21:12:53 -- common/autotest_common.sh@960 -- # wait 66020 00:05:30.560 21:12:54 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:30.560 21:12:54 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:30.560 21:12:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.560 21:12:54 -- common/autotest_common.sh@10 -- # set +x 00:05:30.560 21:12:54 -- json_config/json_config.sh@381 -- # return 0 00:05:30.560 INFO: Success 00:05:30.560 21:12:54 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:30.560 00:05:30.560 real 0m7.956s 00:05:30.560 user 0m11.516s 00:05:30.560 sys 0m1.386s 00:05:30.560 21:12:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.560 21:12:54 -- common/autotest_common.sh@10 -- # set +x 00:05:30.560 ************************************ 00:05:30.560 END TEST json_config 00:05:30.560 ************************************ 00:05:30.560 21:12:54 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:30.560 21:12:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.560 21:12:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.560 21:12:54 -- common/autotest_common.sh@10 -- # set +x 00:05:30.560 ************************************ 00:05:30.560 START TEST json_config_extra_key 00:05:30.560 ************************************ 00:05:30.560 21:12:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:30.560 21:12:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:30.560 21:12:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:30.560 21:12:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:30.560 21:12:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:30.560 21:12:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:30.560 21:12:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:30.560 21:12:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:30.560 21:12:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:30.560 21:12:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:30.560 21:12:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.560 21:12:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:30.560 21:12:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:30.560 21:12:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:30.560 21:12:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:30.560 21:12:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:30.560 21:12:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:30.560 21:12:54 -- scripts/common.sh@344 -- # : 1 00:05:30.560 21:12:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:30.560 21:12:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.560 21:12:54 -- scripts/common.sh@364 -- # decimal 1 00:05:30.560 21:12:54 -- scripts/common.sh@352 -- # local d=1 00:05:30.560 21:12:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.560 21:12:54 -- scripts/common.sh@354 -- # echo 1 00:05:30.560 21:12:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:30.560 21:12:54 -- scripts/common.sh@365 -- # decimal 2 00:05:30.560 21:12:54 -- scripts/common.sh@352 -- # local d=2 00:05:30.560 21:12:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.560 21:12:54 -- scripts/common.sh@354 -- # echo 2 00:05:30.560 21:12:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:30.560 21:12:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:30.560 21:12:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:30.560 21:12:54 -- scripts/common.sh@367 -- # return 0 00:05:30.560 21:12:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.560 21:12:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:30.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.560 --rc genhtml_branch_coverage=1 00:05:30.560 --rc genhtml_function_coverage=1 00:05:30.560 --rc genhtml_legend=1 00:05:30.560 --rc geninfo_all_blocks=1 00:05:30.560 --rc geninfo_unexecuted_blocks=1 00:05:30.560 00:05:30.560 ' 00:05:30.560 21:12:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:30.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.560 --rc genhtml_branch_coverage=1 00:05:30.560 --rc genhtml_function_coverage=1 00:05:30.560 --rc genhtml_legend=1 00:05:30.560 --rc geninfo_all_blocks=1 00:05:30.560 --rc geninfo_unexecuted_blocks=1 00:05:30.560 00:05:30.560 ' 00:05:30.560 21:12:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:30.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.560 --rc genhtml_branch_coverage=1 00:05:30.560 --rc genhtml_function_coverage=1 00:05:30.560 --rc genhtml_legend=1 00:05:30.560 --rc geninfo_all_blocks=1 00:05:30.560 --rc geninfo_unexecuted_blocks=1 00:05:30.560 00:05:30.560 ' 00:05:30.560 21:12:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:30.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.560 --rc genhtml_branch_coverage=1 00:05:30.560 --rc genhtml_function_coverage=1 00:05:30.560 --rc genhtml_legend=1 00:05:30.560 --rc geninfo_all_blocks=1 00:05:30.560 --rc geninfo_unexecuted_blocks=1 00:05:30.560 00:05:30.560 ' 00:05:30.560 21:12:54 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:30.560 21:12:54 -- nvmf/common.sh@7 -- # uname -s 00:05:30.820 21:12:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.820 21:12:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.820 21:12:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.820 21:12:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.820 21:12:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.820 21:12:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.820 21:12:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.820 21:12:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.820 21:12:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.820 21:12:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.820 21:12:54 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:05:30.820 21:12:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:05:30.820 21:12:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.820 21:12:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.820 21:12:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:30.820 21:12:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:30.820 21:12:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.820 21:12:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.820 21:12:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.820 21:12:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.820 21:12:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.820 21:12:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.820 21:12:54 -- paths/export.sh@5 -- # export PATH 00:05:30.820 21:12:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.820 21:12:54 -- nvmf/common.sh@46 -- # : 0 00:05:30.820 21:12:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:30.820 21:12:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:30.820 21:12:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:30.820 21:12:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.820 21:12:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.820 21:12:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:30.820 21:12:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:30.820 21:12:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:30.820 INFO: launching applications... 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=66173 00:05:30.820 Waiting for target to run... 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 66173 /var/tmp/spdk_tgt.sock 00:05:30.820 21:12:54 -- common/autotest_common.sh@829 -- # '[' -z 66173 ']' 00:05:30.820 21:12:54 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:30.820 21:12:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.820 21:12:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.820 21:12:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.820 21:12:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.820 21:12:54 -- common/autotest_common.sh@10 -- # set +x 00:05:30.820 [2024-11-28 21:12:54.379494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
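(Editor's note.) The launch traced here is spdk_tgt started with a pre-built JSON config on a private RPC socket, then polled until that socket answers. A rough equivalent using the same flags the log shows, with the socket path and config file carried over from the trace and the polling loop standing in for waitforlisten:

    # Start the target with the extra-key JSON config on a dedicated RPC socket.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./test/json_config/extra_key.json &
    tgt_pid=$!

    # Poll until the RPC socket responds (the test's waitforlisten does this with a retry loop).
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "target $tgt_pid is up"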
00:05:30.820 [2024-11-28 21:12:54.379588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66173 ] 00:05:31.079 [2024-11-28 21:12:54.671053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.080 [2024-11-28 21:12:54.689742] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.080 [2024-11-28 21:12:54.689914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.014 21:12:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.014 21:12:55 -- common/autotest_common.sh@862 -- # return 0 00:05:32.014 00:05:32.014 21:12:55 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:32.014 INFO: shutting down applications... 00:05:32.014 21:12:55 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:32.014 21:12:55 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:32.014 21:12:55 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:32.014 21:12:55 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:32.014 21:12:55 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 66173 ]] 00:05:32.014 21:12:55 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 66173 00:05:32.014 21:12:55 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:32.014 21:12:55 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:32.014 21:12:55 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66173 00:05:32.014 21:12:55 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:32.271 21:12:55 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:32.271 21:12:55 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:32.271 21:12:55 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66173 00:05:32.271 21:12:55 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:32.271 21:12:55 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:32.271 21:12:55 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:32.271 SPDK target shutdown done 00:05:32.271 21:12:55 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:32.271 Success 00:05:32.271 21:12:55 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:32.271 00:05:32.271 real 0m1.754s 00:05:32.271 user 0m1.631s 00:05:32.271 sys 0m0.305s 00:05:32.271 21:12:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.271 ************************************ 00:05:32.271 END TEST json_config_extra_key 00:05:32.271 ************************************ 00:05:32.271 21:12:55 -- common/autotest_common.sh@10 -- # set +x 00:05:32.271 21:12:55 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:32.271 21:12:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.271 21:12:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.271 21:12:55 -- common/autotest_common.sh@10 -- # set +x 00:05:32.271 ************************************ 00:05:32.271 START TEST alias_rpc 00:05:32.271 ************************************ 00:05:32.271 21:12:55 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:32.529 * Looking for test storage... 00:05:32.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:32.529 21:12:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:32.529 21:12:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:32.529 21:12:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:32.529 21:12:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:32.529 21:12:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:32.529 21:12:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:32.529 21:12:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:32.529 21:12:56 -- scripts/common.sh@335 -- # IFS=.-: 00:05:32.529 21:12:56 -- scripts/common.sh@335 -- # read -ra ver1 00:05:32.529 21:12:56 -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.529 21:12:56 -- scripts/common.sh@336 -- # read -ra ver2 00:05:32.529 21:12:56 -- scripts/common.sh@337 -- # local 'op=<' 00:05:32.529 21:12:56 -- scripts/common.sh@339 -- # ver1_l=2 00:05:32.529 21:12:56 -- scripts/common.sh@340 -- # ver2_l=1 00:05:32.529 21:12:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:32.529 21:12:56 -- scripts/common.sh@343 -- # case "$op" in 00:05:32.529 21:12:56 -- scripts/common.sh@344 -- # : 1 00:05:32.529 21:12:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:32.530 21:12:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.530 21:12:56 -- scripts/common.sh@364 -- # decimal 1 00:05:32.530 21:12:56 -- scripts/common.sh@352 -- # local d=1 00:05:32.530 21:12:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.530 21:12:56 -- scripts/common.sh@354 -- # echo 1 00:05:32.530 21:12:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:32.530 21:12:56 -- scripts/common.sh@365 -- # decimal 2 00:05:32.530 21:12:56 -- scripts/common.sh@352 -- # local d=2 00:05:32.530 21:12:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.530 21:12:56 -- scripts/common.sh@354 -- # echo 2 00:05:32.530 21:12:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:32.530 21:12:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:32.530 21:12:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:32.530 21:12:56 -- scripts/common.sh@367 -- # return 0 00:05:32.530 21:12:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.530 21:12:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:32.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.530 --rc genhtml_branch_coverage=1 00:05:32.530 --rc genhtml_function_coverage=1 00:05:32.530 --rc genhtml_legend=1 00:05:32.530 --rc geninfo_all_blocks=1 00:05:32.530 --rc geninfo_unexecuted_blocks=1 00:05:32.530 00:05:32.530 ' 00:05:32.530 21:12:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:32.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.530 --rc genhtml_branch_coverage=1 00:05:32.530 --rc genhtml_function_coverage=1 00:05:32.530 --rc genhtml_legend=1 00:05:32.530 --rc geninfo_all_blocks=1 00:05:32.530 --rc geninfo_unexecuted_blocks=1 00:05:32.530 00:05:32.530 ' 00:05:32.530 21:12:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:32.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.530 --rc genhtml_branch_coverage=1 00:05:32.530 --rc genhtml_function_coverage=1 00:05:32.530 --rc genhtml_legend=1 
00:05:32.530 --rc geninfo_all_blocks=1 00:05:32.530 --rc geninfo_unexecuted_blocks=1 00:05:32.530 00:05:32.530 ' 00:05:32.530 21:12:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:32.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.530 --rc genhtml_branch_coverage=1 00:05:32.530 --rc genhtml_function_coverage=1 00:05:32.530 --rc genhtml_legend=1 00:05:32.530 --rc geninfo_all_blocks=1 00:05:32.530 --rc geninfo_unexecuted_blocks=1 00:05:32.530 00:05:32.530 ' 00:05:32.530 21:12:56 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:32.530 21:12:56 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66239 00:05:32.530 21:12:56 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66239 00:05:32.530 21:12:56 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.530 21:12:56 -- common/autotest_common.sh@829 -- # '[' -z 66239 ']' 00:05:32.530 21:12:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.530 21:12:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.530 21:12:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.530 21:12:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.530 21:12:56 -- common/autotest_common.sh@10 -- # set +x 00:05:32.530 [2024-11-28 21:12:56.195458] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:32.530 [2024-11-28 21:12:56.195548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66239 ] 00:05:32.789 [2024-11-28 21:12:56.332937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.789 [2024-11-28 21:12:56.368614] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.789 [2024-11-28 21:12:56.368763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.723 21:12:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.723 21:12:57 -- common/autotest_common.sh@862 -- # return 0 00:05:33.723 21:12:57 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:33.982 21:12:57 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66239 00:05:33.982 21:12:57 -- common/autotest_common.sh@936 -- # '[' -z 66239 ']' 00:05:33.982 21:12:57 -- common/autotest_common.sh@940 -- # kill -0 66239 00:05:33.982 21:12:57 -- common/autotest_common.sh@941 -- # uname 00:05:33.982 21:12:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.982 21:12:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66239 00:05:33.982 21:12:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:33.982 21:12:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:33.982 killing process with pid 66239 00:05:33.982 21:12:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66239' 00:05:33.982 21:12:57 -- common/autotest_common.sh@955 -- # kill 66239 00:05:33.982 21:12:57 -- common/autotest_common.sh@960 -- # wait 66239 00:05:34.240 00:05:34.240 real 0m1.832s 00:05:34.240 user 0m2.229s 00:05:34.240 sys 0m0.360s 00:05:34.240 21:12:57 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.240 21:12:57 -- common/autotest_common.sh@10 -- # set +x 00:05:34.240 ************************************ 00:05:34.240 END TEST alias_rpc 00:05:34.240 ************************************ 00:05:34.240 21:12:57 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:34.241 21:12:57 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:34.241 21:12:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.241 21:12:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.241 21:12:57 -- common/autotest_common.sh@10 -- # set +x 00:05:34.241 ************************************ 00:05:34.241 START TEST spdkcli_tcp 00:05:34.241 ************************************ 00:05:34.241 21:12:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:34.241 * Looking for test storage... 00:05:34.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:34.241 21:12:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:34.241 21:12:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:34.241 21:12:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:34.500 21:12:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:34.500 21:12:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:34.500 21:12:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:34.500 21:12:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:34.500 21:12:58 -- scripts/common.sh@335 -- # IFS=.-: 00:05:34.500 21:12:58 -- scripts/common.sh@335 -- # read -ra ver1 00:05:34.500 21:12:58 -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.500 21:12:58 -- scripts/common.sh@336 -- # read -ra ver2 00:05:34.500 21:12:58 -- scripts/common.sh@337 -- # local 'op=<' 00:05:34.500 21:12:58 -- scripts/common.sh@339 -- # ver1_l=2 00:05:34.500 21:12:58 -- scripts/common.sh@340 -- # ver2_l=1 00:05:34.500 21:12:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:34.500 21:12:58 -- scripts/common.sh@343 -- # case "$op" in 00:05:34.500 21:12:58 -- scripts/common.sh@344 -- # : 1 00:05:34.500 21:12:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:34.500 21:12:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.500 21:12:58 -- scripts/common.sh@364 -- # decimal 1 00:05:34.501 21:12:58 -- scripts/common.sh@352 -- # local d=1 00:05:34.501 21:12:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.501 21:12:58 -- scripts/common.sh@354 -- # echo 1 00:05:34.501 21:12:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:34.501 21:12:58 -- scripts/common.sh@365 -- # decimal 2 00:05:34.501 21:12:58 -- scripts/common.sh@352 -- # local d=2 00:05:34.501 21:12:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.501 21:12:58 -- scripts/common.sh@354 -- # echo 2 00:05:34.501 21:12:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:34.501 21:12:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:34.501 21:12:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:34.501 21:12:58 -- scripts/common.sh@367 -- # return 0 00:05:34.501 21:12:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.501 21:12:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:34.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.501 --rc genhtml_branch_coverage=1 00:05:34.501 --rc genhtml_function_coverage=1 00:05:34.501 --rc genhtml_legend=1 00:05:34.501 --rc geninfo_all_blocks=1 00:05:34.501 --rc geninfo_unexecuted_blocks=1 00:05:34.501 00:05:34.501 ' 00:05:34.501 21:12:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:34.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.501 --rc genhtml_branch_coverage=1 00:05:34.501 --rc genhtml_function_coverage=1 00:05:34.501 --rc genhtml_legend=1 00:05:34.501 --rc geninfo_all_blocks=1 00:05:34.501 --rc geninfo_unexecuted_blocks=1 00:05:34.501 00:05:34.501 ' 00:05:34.501 21:12:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:34.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.501 --rc genhtml_branch_coverage=1 00:05:34.501 --rc genhtml_function_coverage=1 00:05:34.501 --rc genhtml_legend=1 00:05:34.501 --rc geninfo_all_blocks=1 00:05:34.501 --rc geninfo_unexecuted_blocks=1 00:05:34.501 00:05:34.501 ' 00:05:34.501 21:12:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:34.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.501 --rc genhtml_branch_coverage=1 00:05:34.501 --rc genhtml_function_coverage=1 00:05:34.501 --rc genhtml_legend=1 00:05:34.501 --rc geninfo_all_blocks=1 00:05:34.501 --rc geninfo_unexecuted_blocks=1 00:05:34.501 00:05:34.501 ' 00:05:34.501 21:12:58 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:34.501 21:12:58 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:34.501 21:12:58 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:34.501 21:12:58 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:34.501 21:12:58 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:34.501 21:12:58 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:34.501 21:12:58 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:34.501 21:12:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.501 21:12:58 -- common/autotest_common.sh@10 -- # set +x 00:05:34.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
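(Editor's note.) What follows is the spdkcli_tcp startup: a socat process bridges TCP port 9998 to the target's UNIX-domain RPC socket so rpc.py can reach it over 127.0.0.1. A condensed sketch of that wiring, with the addresses and rpc.py options taken directly from the trace below:

    # Bridge TCP 9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Issue an RPC over TCP instead of the UNIX socket
    # (-r sets the retry count, -t the per-call timeout, as used in the log).
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    # Tear the bridge down when done.
    kill $socat_pid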
00:05:34.501 21:12:58 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66322 00:05:34.501 21:12:58 -- spdkcli/tcp.sh@27 -- # waitforlisten 66322 00:05:34.501 21:12:58 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:34.501 21:12:58 -- common/autotest_common.sh@829 -- # '[' -z 66322 ']' 00:05:34.501 21:12:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.501 21:12:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.501 21:12:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.501 21:12:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.501 21:12:58 -- common/autotest_common.sh@10 -- # set +x 00:05:34.501 [2024-11-28 21:12:58.091101] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:34.501 [2024-11-28 21:12:58.091201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66322 ] 00:05:34.501 [2024-11-28 21:12:58.225859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.760 [2024-11-28 21:12:58.262290] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.760 [2024-11-28 21:12:58.262624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.760 [2024-11-28 21:12:58.262684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.698 21:12:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.698 21:12:59 -- common/autotest_common.sh@862 -- # return 0 00:05:35.698 21:12:59 -- spdkcli/tcp.sh@31 -- # socat_pid=66339 00:05:35.698 21:12:59 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:35.698 21:12:59 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:35.698 [ 00:05:35.698 "bdev_malloc_delete", 00:05:35.698 "bdev_malloc_create", 00:05:35.698 "bdev_null_resize", 00:05:35.698 "bdev_null_delete", 00:05:35.698 "bdev_null_create", 00:05:35.698 "bdev_nvme_cuse_unregister", 00:05:35.698 "bdev_nvme_cuse_register", 00:05:35.698 "bdev_opal_new_user", 00:05:35.698 "bdev_opal_set_lock_state", 00:05:35.698 "bdev_opal_delete", 00:05:35.698 "bdev_opal_get_info", 00:05:35.698 "bdev_opal_create", 00:05:35.698 "bdev_nvme_opal_revert", 00:05:35.698 "bdev_nvme_opal_init", 00:05:35.698 "bdev_nvme_send_cmd", 00:05:35.698 "bdev_nvme_get_path_iostat", 00:05:35.698 "bdev_nvme_get_mdns_discovery_info", 00:05:35.698 "bdev_nvme_stop_mdns_discovery", 00:05:35.698 "bdev_nvme_start_mdns_discovery", 00:05:35.698 "bdev_nvme_set_multipath_policy", 00:05:35.698 "bdev_nvme_set_preferred_path", 00:05:35.698 "bdev_nvme_get_io_paths", 00:05:35.698 "bdev_nvme_remove_error_injection", 00:05:35.698 "bdev_nvme_add_error_injection", 00:05:35.698 "bdev_nvme_get_discovery_info", 00:05:35.698 "bdev_nvme_stop_discovery", 00:05:35.698 "bdev_nvme_start_discovery", 00:05:35.698 "bdev_nvme_get_controller_health_info", 00:05:35.698 "bdev_nvme_disable_controller", 00:05:35.698 "bdev_nvme_enable_controller", 00:05:35.698 "bdev_nvme_reset_controller", 00:05:35.698 "bdev_nvme_get_transport_statistics", 00:05:35.698 "bdev_nvme_apply_firmware", 00:05:35.698 "bdev_nvme_detach_controller", 00:05:35.698 
"bdev_nvme_get_controllers", 00:05:35.698 "bdev_nvme_attach_controller", 00:05:35.698 "bdev_nvme_set_hotplug", 00:05:35.698 "bdev_nvme_set_options", 00:05:35.698 "bdev_passthru_delete", 00:05:35.698 "bdev_passthru_create", 00:05:35.698 "bdev_lvol_grow_lvstore", 00:05:35.698 "bdev_lvol_get_lvols", 00:05:35.698 "bdev_lvol_get_lvstores", 00:05:35.698 "bdev_lvol_delete", 00:05:35.698 "bdev_lvol_set_read_only", 00:05:35.698 "bdev_lvol_resize", 00:05:35.698 "bdev_lvol_decouple_parent", 00:05:35.698 "bdev_lvol_inflate", 00:05:35.698 "bdev_lvol_rename", 00:05:35.698 "bdev_lvol_clone_bdev", 00:05:35.698 "bdev_lvol_clone", 00:05:35.698 "bdev_lvol_snapshot", 00:05:35.698 "bdev_lvol_create", 00:05:35.698 "bdev_lvol_delete_lvstore", 00:05:35.698 "bdev_lvol_rename_lvstore", 00:05:35.698 "bdev_lvol_create_lvstore", 00:05:35.698 "bdev_raid_set_options", 00:05:35.698 "bdev_raid_remove_base_bdev", 00:05:35.698 "bdev_raid_add_base_bdev", 00:05:35.698 "bdev_raid_delete", 00:05:35.698 "bdev_raid_create", 00:05:35.698 "bdev_raid_get_bdevs", 00:05:35.698 "bdev_error_inject_error", 00:05:35.698 "bdev_error_delete", 00:05:35.698 "bdev_error_create", 00:05:35.698 "bdev_split_delete", 00:05:35.698 "bdev_split_create", 00:05:35.698 "bdev_delay_delete", 00:05:35.698 "bdev_delay_create", 00:05:35.698 "bdev_delay_update_latency", 00:05:35.698 "bdev_zone_block_delete", 00:05:35.698 "bdev_zone_block_create", 00:05:35.698 "blobfs_create", 00:05:35.698 "blobfs_detect", 00:05:35.698 "blobfs_set_cache_size", 00:05:35.698 "bdev_aio_delete", 00:05:35.698 "bdev_aio_rescan", 00:05:35.698 "bdev_aio_create", 00:05:35.698 "bdev_ftl_set_property", 00:05:35.698 "bdev_ftl_get_properties", 00:05:35.698 "bdev_ftl_get_stats", 00:05:35.698 "bdev_ftl_unmap", 00:05:35.698 "bdev_ftl_unload", 00:05:35.698 "bdev_ftl_delete", 00:05:35.698 "bdev_ftl_load", 00:05:35.698 "bdev_ftl_create", 00:05:35.698 "bdev_virtio_attach_controller", 00:05:35.698 "bdev_virtio_scsi_get_devices", 00:05:35.698 "bdev_virtio_detach_controller", 00:05:35.698 "bdev_virtio_blk_set_hotplug", 00:05:35.698 "bdev_iscsi_delete", 00:05:35.698 "bdev_iscsi_create", 00:05:35.698 "bdev_iscsi_set_options", 00:05:35.698 "bdev_uring_delete", 00:05:35.698 "bdev_uring_create", 00:05:35.698 "accel_error_inject_error", 00:05:35.698 "ioat_scan_accel_module", 00:05:35.698 "dsa_scan_accel_module", 00:05:35.698 "iaa_scan_accel_module", 00:05:35.698 "iscsi_set_options", 00:05:35.698 "iscsi_get_auth_groups", 00:05:35.698 "iscsi_auth_group_remove_secret", 00:05:35.698 "iscsi_auth_group_add_secret", 00:05:35.698 "iscsi_delete_auth_group", 00:05:35.698 "iscsi_create_auth_group", 00:05:35.698 "iscsi_set_discovery_auth", 00:05:35.698 "iscsi_get_options", 00:05:35.698 "iscsi_target_node_request_logout", 00:05:35.698 "iscsi_target_node_set_redirect", 00:05:35.698 "iscsi_target_node_set_auth", 00:05:35.698 "iscsi_target_node_add_lun", 00:05:35.698 "iscsi_get_connections", 00:05:35.698 "iscsi_portal_group_set_auth", 00:05:35.698 "iscsi_start_portal_group", 00:05:35.698 "iscsi_delete_portal_group", 00:05:35.698 "iscsi_create_portal_group", 00:05:35.698 "iscsi_get_portal_groups", 00:05:35.698 "iscsi_delete_target_node", 00:05:35.698 "iscsi_target_node_remove_pg_ig_maps", 00:05:35.698 "iscsi_target_node_add_pg_ig_maps", 00:05:35.698 "iscsi_create_target_node", 00:05:35.698 "iscsi_get_target_nodes", 00:05:35.698 "iscsi_delete_initiator_group", 00:05:35.698 "iscsi_initiator_group_remove_initiators", 00:05:35.698 "iscsi_initiator_group_add_initiators", 00:05:35.698 "iscsi_create_initiator_group", 00:05:35.698 
"iscsi_get_initiator_groups", 00:05:35.698 "nvmf_set_crdt", 00:05:35.698 "nvmf_set_config", 00:05:35.698 "nvmf_set_max_subsystems", 00:05:35.698 "nvmf_subsystem_get_listeners", 00:05:35.698 "nvmf_subsystem_get_qpairs", 00:05:35.698 "nvmf_subsystem_get_controllers", 00:05:35.698 "nvmf_get_stats", 00:05:35.698 "nvmf_get_transports", 00:05:35.698 "nvmf_create_transport", 00:05:35.698 "nvmf_get_targets", 00:05:35.698 "nvmf_delete_target", 00:05:35.698 "nvmf_create_target", 00:05:35.698 "nvmf_subsystem_allow_any_host", 00:05:35.698 "nvmf_subsystem_remove_host", 00:05:35.698 "nvmf_subsystem_add_host", 00:05:35.698 "nvmf_subsystem_remove_ns", 00:05:35.698 "nvmf_subsystem_add_ns", 00:05:35.698 "nvmf_subsystem_listener_set_ana_state", 00:05:35.698 "nvmf_discovery_get_referrals", 00:05:35.698 "nvmf_discovery_remove_referral", 00:05:35.698 "nvmf_discovery_add_referral", 00:05:35.698 "nvmf_subsystem_remove_listener", 00:05:35.698 "nvmf_subsystem_add_listener", 00:05:35.698 "nvmf_delete_subsystem", 00:05:35.698 "nvmf_create_subsystem", 00:05:35.698 "nvmf_get_subsystems", 00:05:35.698 "env_dpdk_get_mem_stats", 00:05:35.698 "nbd_get_disks", 00:05:35.698 "nbd_stop_disk", 00:05:35.698 "nbd_start_disk", 00:05:35.698 "ublk_recover_disk", 00:05:35.698 "ublk_get_disks", 00:05:35.698 "ublk_stop_disk", 00:05:35.698 "ublk_start_disk", 00:05:35.698 "ublk_destroy_target", 00:05:35.698 "ublk_create_target", 00:05:35.698 "virtio_blk_create_transport", 00:05:35.698 "virtio_blk_get_transports", 00:05:35.698 "vhost_controller_set_coalescing", 00:05:35.698 "vhost_get_controllers", 00:05:35.698 "vhost_delete_controller", 00:05:35.699 "vhost_create_blk_controller", 00:05:35.699 "vhost_scsi_controller_remove_target", 00:05:35.699 "vhost_scsi_controller_add_target", 00:05:35.699 "vhost_start_scsi_controller", 00:05:35.699 "vhost_create_scsi_controller", 00:05:35.699 "thread_set_cpumask", 00:05:35.699 "framework_get_scheduler", 00:05:35.699 "framework_set_scheduler", 00:05:35.699 "framework_get_reactors", 00:05:35.699 "thread_get_io_channels", 00:05:35.699 "thread_get_pollers", 00:05:35.699 "thread_get_stats", 00:05:35.699 "framework_monitor_context_switch", 00:05:35.699 "spdk_kill_instance", 00:05:35.699 "log_enable_timestamps", 00:05:35.699 "log_get_flags", 00:05:35.699 "log_clear_flag", 00:05:35.699 "log_set_flag", 00:05:35.699 "log_get_level", 00:05:35.699 "log_set_level", 00:05:35.699 "log_get_print_level", 00:05:35.699 "log_set_print_level", 00:05:35.699 "framework_enable_cpumask_locks", 00:05:35.699 "framework_disable_cpumask_locks", 00:05:35.699 "framework_wait_init", 00:05:35.699 "framework_start_init", 00:05:35.699 "scsi_get_devices", 00:05:35.699 "bdev_get_histogram", 00:05:35.699 "bdev_enable_histogram", 00:05:35.699 "bdev_set_qos_limit", 00:05:35.699 "bdev_set_qd_sampling_period", 00:05:35.699 "bdev_get_bdevs", 00:05:35.699 "bdev_reset_iostat", 00:05:35.699 "bdev_get_iostat", 00:05:35.699 "bdev_examine", 00:05:35.699 "bdev_wait_for_examine", 00:05:35.699 "bdev_set_options", 00:05:35.699 "notify_get_notifications", 00:05:35.699 "notify_get_types", 00:05:35.699 "accel_get_stats", 00:05:35.699 "accel_set_options", 00:05:35.699 "accel_set_driver", 00:05:35.699 "accel_crypto_key_destroy", 00:05:35.699 "accel_crypto_keys_get", 00:05:35.699 "accel_crypto_key_create", 00:05:35.699 "accel_assign_opc", 00:05:35.699 "accel_get_module_info", 00:05:35.699 "accel_get_opc_assignments", 00:05:35.699 "vmd_rescan", 00:05:35.699 "vmd_remove_device", 00:05:35.699 "vmd_enable", 00:05:35.699 "sock_set_default_impl", 00:05:35.699 
"sock_impl_set_options", 00:05:35.699 "sock_impl_get_options", 00:05:35.699 "iobuf_get_stats", 00:05:35.699 "iobuf_set_options", 00:05:35.699 "framework_get_pci_devices", 00:05:35.699 "framework_get_config", 00:05:35.699 "framework_get_subsystems", 00:05:35.699 "trace_get_info", 00:05:35.699 "trace_get_tpoint_group_mask", 00:05:35.699 "trace_disable_tpoint_group", 00:05:35.699 "trace_enable_tpoint_group", 00:05:35.699 "trace_clear_tpoint_mask", 00:05:35.699 "trace_set_tpoint_mask", 00:05:35.699 "spdk_get_version", 00:05:35.699 "rpc_get_methods" 00:05:35.699 ] 00:05:35.699 21:12:59 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:35.699 21:12:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.699 21:12:59 -- common/autotest_common.sh@10 -- # set +x 00:05:35.699 21:12:59 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:35.699 21:12:59 -- spdkcli/tcp.sh@38 -- # killprocess 66322 00:05:35.699 21:12:59 -- common/autotest_common.sh@936 -- # '[' -z 66322 ']' 00:05:35.699 21:12:59 -- common/autotest_common.sh@940 -- # kill -0 66322 00:05:35.699 21:12:59 -- common/autotest_common.sh@941 -- # uname 00:05:35.699 21:12:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:35.699 21:12:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66322 00:05:35.699 killing process with pid 66322 00:05:35.699 21:12:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:35.699 21:12:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:35.699 21:12:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66322' 00:05:35.699 21:12:59 -- common/autotest_common.sh@955 -- # kill 66322 00:05:35.699 21:12:59 -- common/autotest_common.sh@960 -- # wait 66322 00:05:35.958 ************************************ 00:05:35.958 END TEST spdkcli_tcp 00:05:35.958 ************************************ 00:05:35.958 00:05:35.958 real 0m1.773s 00:05:35.958 user 0m3.383s 00:05:35.958 sys 0m0.378s 00:05:35.958 21:12:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.958 21:12:59 -- common/autotest_common.sh@10 -- # set +x 00:05:35.958 21:12:59 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:35.958 21:12:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.958 21:12:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.958 21:12:59 -- common/autotest_common.sh@10 -- # set +x 00:05:35.958 ************************************ 00:05:35.958 START TEST dpdk_mem_utility 00:05:35.958 ************************************ 00:05:35.958 21:12:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.218 * Looking for test storage... 
00:05:36.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:36.218 21:12:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:36.218 21:12:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:36.218 21:12:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:36.218 21:12:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:36.218 21:12:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:36.218 21:12:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:36.218 21:12:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:36.218 21:12:59 -- scripts/common.sh@335 -- # IFS=.-: 00:05:36.218 21:12:59 -- scripts/common.sh@335 -- # read -ra ver1 00:05:36.218 21:12:59 -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.218 21:12:59 -- scripts/common.sh@336 -- # read -ra ver2 00:05:36.218 21:12:59 -- scripts/common.sh@337 -- # local 'op=<' 00:05:36.218 21:12:59 -- scripts/common.sh@339 -- # ver1_l=2 00:05:36.218 21:12:59 -- scripts/common.sh@340 -- # ver2_l=1 00:05:36.218 21:12:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:36.218 21:12:59 -- scripts/common.sh@343 -- # case "$op" in 00:05:36.218 21:12:59 -- scripts/common.sh@344 -- # : 1 00:05:36.218 21:12:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:36.218 21:12:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.218 21:12:59 -- scripts/common.sh@364 -- # decimal 1 00:05:36.218 21:12:59 -- scripts/common.sh@352 -- # local d=1 00:05:36.218 21:12:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.218 21:12:59 -- scripts/common.sh@354 -- # echo 1 00:05:36.218 21:12:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:36.218 21:12:59 -- scripts/common.sh@365 -- # decimal 2 00:05:36.218 21:12:59 -- scripts/common.sh@352 -- # local d=2 00:05:36.218 21:12:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.218 21:12:59 -- scripts/common.sh@354 -- # echo 2 00:05:36.218 21:12:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:36.218 21:12:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:36.218 21:12:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:36.218 21:12:59 -- scripts/common.sh@367 -- # return 0 00:05:36.218 21:12:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.218 21:12:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:36.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.218 --rc genhtml_branch_coverage=1 00:05:36.218 --rc genhtml_function_coverage=1 00:05:36.218 --rc genhtml_legend=1 00:05:36.218 --rc geninfo_all_blocks=1 00:05:36.218 --rc geninfo_unexecuted_blocks=1 00:05:36.218 00:05:36.218 ' 00:05:36.218 21:12:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:36.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.218 --rc genhtml_branch_coverage=1 00:05:36.218 --rc genhtml_function_coverage=1 00:05:36.218 --rc genhtml_legend=1 00:05:36.218 --rc geninfo_all_blocks=1 00:05:36.218 --rc geninfo_unexecuted_blocks=1 00:05:36.218 00:05:36.218 ' 00:05:36.218 21:12:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:36.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.218 --rc genhtml_branch_coverage=1 00:05:36.218 --rc genhtml_function_coverage=1 00:05:36.218 --rc genhtml_legend=1 00:05:36.218 --rc geninfo_all_blocks=1 00:05:36.218 --rc geninfo_unexecuted_blocks=1 00:05:36.218 00:05:36.218 ' 
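(Editor's note.) The portion of the log that follows exercises the DPDK memory-stats helpers: an RPC asks the target to dump its DPDK memory state to a file, and dpdk_mem_info.py summarizes it, first overall and then per heap with -m 0. A minimal sketch, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock:

    # Ask the running target to write its DPDK memory dump
    # (the RPC reply names the dump file, /tmp/spdk_mem_dump.txt in this log).
    ./scripts/rpc.py env_dpdk_get_mem_stats

    # Summarize the dump: heap/mempool/memzone totals, then heap 0 in detail.
    ./scripts/dpdk_mem_info.py
    ./scripts/dpdk_mem_info.py -m 0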
00:05:36.218 21:12:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:36.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.218 --rc genhtml_branch_coverage=1 00:05:36.218 --rc genhtml_function_coverage=1 00:05:36.218 --rc genhtml_legend=1 00:05:36.218 --rc geninfo_all_blocks=1 00:05:36.218 --rc geninfo_unexecuted_blocks=1 00:05:36.218 00:05:36.218 ' 00:05:36.218 21:12:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:36.218 21:12:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66420 00:05:36.218 21:12:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66420 00:05:36.218 21:12:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.218 21:12:59 -- common/autotest_common.sh@829 -- # '[' -z 66420 ']' 00:05:36.218 21:12:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.218 21:12:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.218 21:12:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.218 21:12:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.218 21:12:59 -- common/autotest_common.sh@10 -- # set +x 00:05:36.218 [2024-11-28 21:12:59.903944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:36.218 [2024-11-28 21:12:59.904266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66420 ] 00:05:36.478 [2024-11-28 21:13:00.039790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.478 [2024-11-28 21:13:00.077690] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.478 [2024-11-28 21:13:00.078116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.417 21:13:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.417 21:13:00 -- common/autotest_common.sh@862 -- # return 0 00:05:37.417 21:13:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:37.417 21:13:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:37.417 21:13:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.417 21:13:00 -- common/autotest_common.sh@10 -- # set +x 00:05:37.417 { 00:05:37.417 "filename": "/tmp/spdk_mem_dump.txt" 00:05:37.417 } 00:05:37.417 21:13:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.417 21:13:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:37.417 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:37.417 1 heaps totaling size 814.000000 MiB 00:05:37.417 size: 814.000000 MiB heap id: 0 00:05:37.417 end heaps---------- 00:05:37.417 8 mempools totaling size 598.116089 MiB 00:05:37.417 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:37.417 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:37.417 size: 84.521057 MiB name: bdev_io_66420 00:05:37.417 size: 51.011292 MiB name: evtpool_66420 00:05:37.417 size: 50.003479 MiB name: msgpool_66420 
00:05:37.417 size: 21.763794 MiB name: PDU_Pool 00:05:37.417 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:37.417 size: 0.026123 MiB name: Session_Pool 00:05:37.417 end mempools------- 00:05:37.417 6 memzones totaling size 4.142822 MiB 00:05:37.417 size: 1.000366 MiB name: RG_ring_0_66420 00:05:37.417 size: 1.000366 MiB name: RG_ring_1_66420 00:05:37.417 size: 1.000366 MiB name: RG_ring_4_66420 00:05:37.417 size: 1.000366 MiB name: RG_ring_5_66420 00:05:37.417 size: 0.125366 MiB name: RG_ring_2_66420 00:05:37.417 size: 0.015991 MiB name: RG_ring_3_66420 00:05:37.417 end memzones------- 00:05:37.417 21:13:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:37.417 heap id: 0 total size: 814.000000 MiB number of busy elements: 303 number of free elements: 15 00:05:37.417 list of free elements. size: 12.471375 MiB 00:05:37.417 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:37.417 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:37.417 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:37.417 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:37.417 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:37.417 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:37.417 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:37.417 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:37.417 element at address: 0x200000200000 with size: 0.832825 MiB 00:05:37.417 element at address: 0x20001aa00000 with size: 0.569153 MiB 00:05:37.417 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:37.417 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:37.417 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:37.417 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:37.417 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:37.417 list of standard malloc elements. 
size: 199.266052 MiB 00:05:37.417 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:37.417 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:37.417 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:37.417 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:37.417 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:37.417 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:37.417 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:37.417 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:37.417 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:37.417 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:37.417 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:05:37.418 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:37.418 element at 
address: 0x200003a5a140 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa91e40 
with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:37.418 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94300 with size: 0.000183 MiB 
00:05:37.419 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:37.419 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e65500 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:37.419 element at 
address: 0x200027e6d500 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6f9c0 
with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:37.419 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:37.419 list of memzone associated elements. size: 602.262573 MiB 00:05:37.419 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:37.419 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:37.419 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:37.419 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:37.419 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:37.419 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66420_0 00:05:37.419 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:37.419 associated memzone info: size: 48.002930 MiB name: MP_evtpool_66420_0 00:05:37.419 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:37.419 associated memzone info: size: 48.002930 MiB name: MP_msgpool_66420_0 00:05:37.419 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:37.419 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:37.419 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:37.419 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:37.419 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:37.419 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66420 00:05:37.419 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:37.419 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66420 00:05:37.419 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:37.419 associated memzone info: size: 1.007996 MiB name: MP_evtpool_66420 00:05:37.419 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:37.419 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:37.419 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:37.419 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:37.419 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:37.419 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:37.419 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:37.419 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:37.419 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:37.419 associated memzone info: size: 1.000366 MiB name: RG_ring_0_66420 00:05:37.419 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:37.419 associated memzone info: size: 1.000366 MiB name: RG_ring_1_66420 00:05:37.419 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:37.419 associated memzone info: size: 1.000366 MiB name: RG_ring_4_66420 00:05:37.419 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:37.419 associated memzone info: size: 1.000366 MiB name: RG_ring_5_66420 00:05:37.419 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:37.419 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66420 
00:05:37.419 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:37.419 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:37.419 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:37.419 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:37.419 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:37.419 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:37.420 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:37.420 associated memzone info: size: 0.125366 MiB name: RG_ring_2_66420 00:05:37.420 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:37.420 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:37.420 element at address: 0x200027e65680 with size: 0.023743 MiB 00:05:37.420 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:37.420 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:37.420 associated memzone info: size: 0.015991 MiB name: RG_ring_3_66420 00:05:37.420 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:05:37.420 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:37.420 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:37.420 associated memzone info: size: 0.000183 MiB name: MP_msgpool_66420 00:05:37.420 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:37.420 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66420 00:05:37.420 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:05:37.420 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:37.420 21:13:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:37.420 21:13:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66420 00:05:37.420 21:13:01 -- common/autotest_common.sh@936 -- # '[' -z 66420 ']' 00:05:37.420 21:13:01 -- common/autotest_common.sh@940 -- # kill -0 66420 00:05:37.420 21:13:01 -- common/autotest_common.sh@941 -- # uname 00:05:37.420 21:13:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.420 21:13:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66420 00:05:37.420 killing process with pid 66420 00:05:37.420 21:13:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:37.420 21:13:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:37.420 21:13:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66420' 00:05:37.420 21:13:01 -- common/autotest_common.sh@955 -- # kill 66420 00:05:37.420 21:13:01 -- common/autotest_common.sh@960 -- # wait 66420 00:05:37.679 00:05:37.679 real 0m1.696s 00:05:37.679 user 0m2.021s 00:05:37.679 sys 0m0.328s 00:05:37.679 21:13:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.679 ************************************ 00:05:37.679 END TEST dpdk_mem_utility 00:05:37.679 ************************************ 00:05:37.679 21:13:01 -- common/autotest_common.sh@10 -- # set +x 00:05:37.679 21:13:01 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:37.679 21:13:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.679 21:13:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.679 21:13:01 -- common/autotest_common.sh@10 -- # set +x 00:05:37.679 ************************************ 00:05:37.679 START TEST event 00:05:37.679 
************************************ 00:05:37.679 21:13:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:37.939 * Looking for test storage... 00:05:37.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:37.939 21:13:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:37.939 21:13:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:37.939 21:13:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:37.939 21:13:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:37.939 21:13:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:37.939 21:13:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:37.939 21:13:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:37.939 21:13:01 -- scripts/common.sh@335 -- # IFS=.-: 00:05:37.939 21:13:01 -- scripts/common.sh@335 -- # read -ra ver1 00:05:37.939 21:13:01 -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.939 21:13:01 -- scripts/common.sh@336 -- # read -ra ver2 00:05:37.939 21:13:01 -- scripts/common.sh@337 -- # local 'op=<' 00:05:37.939 21:13:01 -- scripts/common.sh@339 -- # ver1_l=2 00:05:37.939 21:13:01 -- scripts/common.sh@340 -- # ver2_l=1 00:05:37.939 21:13:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:37.939 21:13:01 -- scripts/common.sh@343 -- # case "$op" in 00:05:37.939 21:13:01 -- scripts/common.sh@344 -- # : 1 00:05:37.939 21:13:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:37.939 21:13:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.939 21:13:01 -- scripts/common.sh@364 -- # decimal 1 00:05:37.939 21:13:01 -- scripts/common.sh@352 -- # local d=1 00:05:37.939 21:13:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.939 21:13:01 -- scripts/common.sh@354 -- # echo 1 00:05:37.939 21:13:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:37.939 21:13:01 -- scripts/common.sh@365 -- # decimal 2 00:05:37.939 21:13:01 -- scripts/common.sh@352 -- # local d=2 00:05:37.939 21:13:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.939 21:13:01 -- scripts/common.sh@354 -- # echo 2 00:05:37.939 21:13:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:37.939 21:13:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:37.939 21:13:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:37.939 21:13:01 -- scripts/common.sh@367 -- # return 0 00:05:37.939 21:13:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.939 21:13:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:37.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.939 --rc genhtml_branch_coverage=1 00:05:37.939 --rc genhtml_function_coverage=1 00:05:37.939 --rc genhtml_legend=1 00:05:37.939 --rc geninfo_all_blocks=1 00:05:37.939 --rc geninfo_unexecuted_blocks=1 00:05:37.939 00:05:37.939 ' 00:05:37.939 21:13:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:37.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.939 --rc genhtml_branch_coverage=1 00:05:37.939 --rc genhtml_function_coverage=1 00:05:37.939 --rc genhtml_legend=1 00:05:37.939 --rc geninfo_all_blocks=1 00:05:37.939 --rc geninfo_unexecuted_blocks=1 00:05:37.939 00:05:37.939 ' 00:05:37.939 21:13:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:37.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.939 --rc genhtml_branch_coverage=1 00:05:37.939 --rc 
genhtml_function_coverage=1 00:05:37.939 --rc genhtml_legend=1 00:05:37.939 --rc geninfo_all_blocks=1 00:05:37.939 --rc geninfo_unexecuted_blocks=1 00:05:37.939 00:05:37.939 ' 00:05:37.939 21:13:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:37.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.939 --rc genhtml_branch_coverage=1 00:05:37.939 --rc genhtml_function_coverage=1 00:05:37.939 --rc genhtml_legend=1 00:05:37.939 --rc geninfo_all_blocks=1 00:05:37.939 --rc geninfo_unexecuted_blocks=1 00:05:37.939 00:05:37.939 ' 00:05:37.939 21:13:01 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:37.939 21:13:01 -- bdev/nbd_common.sh@6 -- # set -e 00:05:37.939 21:13:01 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:37.939 21:13:01 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:37.939 21:13:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.939 21:13:01 -- common/autotest_common.sh@10 -- # set +x 00:05:37.939 ************************************ 00:05:37.939 START TEST event_perf 00:05:37.939 ************************************ 00:05:37.939 21:13:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:37.939 Running I/O for 1 seconds...[2024-11-28 21:13:01.636474] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:37.939 [2024-11-28 21:13:01.636695] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66504 ] 00:05:38.198 [2024-11-28 21:13:01.774634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:38.198 [2024-11-28 21:13:01.811799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.198 [2024-11-28 21:13:01.811895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.198 [2024-11-28 21:13:01.811980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.198 [2024-11-28 21:13:01.811983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.135 Running I/O for 1 seconds... 00:05:39.135 lcore 0: 188064 00:05:39.135 lcore 1: 188064 00:05:39.135 lcore 2: 188064 00:05:39.135 lcore 3: 188065 00:05:39.135 done. 
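The four "lcore N:" lines above are the per-core totals reported by the one-second event_perf run; the command captured in the trace is test/event/event_perf/event_perf with -m 0xF (core mask) and -t 1 (run time in seconds). A rough sketch of repeating that run by hand, outside the run_test wrapper, with the path and flags taken from the trace above (hugepage and privilege setup is assumed to already be in place, as it is in this CI environment):

    # Sketch only: re-run the event_perf benchmark with the same flags as above.
    # -m 0xF is the reactor core mask, -t 1 the duration in seconds.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1
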
00:05:39.135 00:05:39.135 real 0m1.249s 00:05:39.135 user 0m4.074s 00:05:39.135 sys 0m0.052s 00:05:39.135 21:13:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.135 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:05:39.135 ************************************ 00:05:39.135 END TEST event_perf 00:05:39.135 ************************************ 00:05:39.395 21:13:02 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:39.395 21:13:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:39.395 21:13:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.395 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:05:39.395 ************************************ 00:05:39.395 START TEST event_reactor 00:05:39.395 ************************************ 00:05:39.395 21:13:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:39.395 [2024-11-28 21:13:02.942221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:39.395 [2024-11-28 21:13:02.942297] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66537 ] 00:05:39.395 [2024-11-28 21:13:03.072118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.395 [2024-11-28 21:13:03.110778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.774 test_start 00:05:40.774 oneshot 00:05:40.774 tick 100 00:05:40.774 tick 100 00:05:40.774 tick 250 00:05:40.774 tick 100 00:05:40.774 tick 100 00:05:40.774 tick 100 00:05:40.774 tick 250 00:05:40.774 tick 500 00:05:40.774 tick 100 00:05:40.774 tick 100 00:05:40.774 tick 250 00:05:40.774 tick 100 00:05:40.774 tick 100 00:05:40.774 test_end 00:05:40.774 00:05:40.774 real 0m1.240s 00:05:40.774 user 0m1.095s 00:05:40.774 sys 0m0.040s 00:05:40.774 21:13:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.774 ************************************ 00:05:40.774 END TEST event_reactor 00:05:40.774 ************************************ 00:05:40.774 21:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.774 21:13:04 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:40.774 21:13:04 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:40.774 21:13:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.774 21:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.774 ************************************ 00:05:40.774 START TEST event_reactor_perf 00:05:40.774 ************************************ 00:05:40.774 21:13:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:40.774 [2024-11-28 21:13:04.235835] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:40.774 [2024-11-28 21:13:04.235915] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66567 ] 00:05:40.774 [2024-11-28 21:13:04.365557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.774 [2024-11-28 21:13:04.403874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.153 test_start 00:05:42.153 test_end 00:05:42.153 Performance: 395387 events per second 00:05:42.153 ************************************ 00:05:42.153 END TEST event_reactor_perf 00:05:42.153 ************************************ 00:05:42.153 00:05:42.153 real 0m1.255s 00:05:42.153 user 0m1.110s 00:05:42.153 sys 0m0.039s 00:05:42.153 21:13:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.153 21:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:42.153 21:13:05 -- event/event.sh@49 -- # uname -s 00:05:42.153 21:13:05 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:42.153 21:13:05 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:42.153 21:13:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.153 21:13:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.153 21:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:42.153 ************************************ 00:05:42.153 START TEST event_scheduler 00:05:42.153 ************************************ 00:05:42.153 21:13:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:42.153 * Looking for test storage... 00:05:42.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:42.153 21:13:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:42.153 21:13:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:42.153 21:13:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:42.153 21:13:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:42.153 21:13:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:42.153 21:13:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:42.153 21:13:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:42.153 21:13:05 -- scripts/common.sh@335 -- # IFS=.-: 00:05:42.153 21:13:05 -- scripts/common.sh@335 -- # read -ra ver1 00:05:42.153 21:13:05 -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.153 21:13:05 -- scripts/common.sh@336 -- # read -ra ver2 00:05:42.153 21:13:05 -- scripts/common.sh@337 -- # local 'op=<' 00:05:42.153 21:13:05 -- scripts/common.sh@339 -- # ver1_l=2 00:05:42.153 21:13:05 -- scripts/common.sh@340 -- # ver2_l=1 00:05:42.153 21:13:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:42.153 21:13:05 -- scripts/common.sh@343 -- # case "$op" in 00:05:42.153 21:13:05 -- scripts/common.sh@344 -- # : 1 00:05:42.153 21:13:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:42.153 21:13:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.153 21:13:05 -- scripts/common.sh@364 -- # decimal 1 00:05:42.153 21:13:05 -- scripts/common.sh@352 -- # local d=1 00:05:42.153 21:13:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.153 21:13:05 -- scripts/common.sh@354 -- # echo 1 00:05:42.153 21:13:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:42.153 21:13:05 -- scripts/common.sh@365 -- # decimal 2 00:05:42.153 21:13:05 -- scripts/common.sh@352 -- # local d=2 00:05:42.153 21:13:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.153 21:13:05 -- scripts/common.sh@354 -- # echo 2 00:05:42.153 21:13:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:42.153 21:13:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:42.153 21:13:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:42.153 21:13:05 -- scripts/common.sh@367 -- # return 0 00:05:42.153 21:13:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.153 21:13:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.153 --rc genhtml_branch_coverage=1 00:05:42.153 --rc genhtml_function_coverage=1 00:05:42.153 --rc genhtml_legend=1 00:05:42.153 --rc geninfo_all_blocks=1 00:05:42.153 --rc geninfo_unexecuted_blocks=1 00:05:42.153 00:05:42.153 ' 00:05:42.153 21:13:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.153 --rc genhtml_branch_coverage=1 00:05:42.153 --rc genhtml_function_coverage=1 00:05:42.153 --rc genhtml_legend=1 00:05:42.153 --rc geninfo_all_blocks=1 00:05:42.154 --rc geninfo_unexecuted_blocks=1 00:05:42.154 00:05:42.154 ' 00:05:42.154 21:13:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:42.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.154 --rc genhtml_branch_coverage=1 00:05:42.154 --rc genhtml_function_coverage=1 00:05:42.154 --rc genhtml_legend=1 00:05:42.154 --rc geninfo_all_blocks=1 00:05:42.154 --rc geninfo_unexecuted_blocks=1 00:05:42.154 00:05:42.154 ' 00:05:42.154 21:13:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:42.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.154 --rc genhtml_branch_coverage=1 00:05:42.154 --rc genhtml_function_coverage=1 00:05:42.154 --rc genhtml_legend=1 00:05:42.154 --rc geninfo_all_blocks=1 00:05:42.154 --rc geninfo_unexecuted_blocks=1 00:05:42.154 00:05:42.154 ' 00:05:42.154 21:13:05 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:42.154 21:13:05 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66641 00:05:42.154 21:13:05 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.154 21:13:05 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:42.154 21:13:05 -- scheduler/scheduler.sh@37 -- # waitforlisten 66641 00:05:42.154 21:13:05 -- common/autotest_common.sh@829 -- # '[' -z 66641 ']' 00:05:42.154 21:13:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.154 21:13:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.154 21:13:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
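The xtrace block above shows scripts/common.sh deciding that the installed lcov (1.15) is older than 2 before exporting the LCOV/LCOV_OPTS coverage flags: both version strings are split on ".", "-" and ":" and compared component by component, with missing components treated as 0. A rough standalone sketch of that comparison, assuming purely numeric components (the helper name version_lt is made up for illustration; the helpers actually traced are lt, cmp_versions and decimal in scripts/common.sh):

    # Illustrative sketch of the version comparison traced above.
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            # Missing components default to 0, so "2" compares like "2.0".
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov is older than 2"   # mirrors the 'lt 1.15 2' call above
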
00:05:42.154 21:13:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.154 21:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:42.154 [2024-11-28 21:13:05.751588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:42.154 [2024-11-28 21:13:05.751969] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66641 ] 00:05:42.154 [2024-11-28 21:13:05.892282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.413 [2024-11-28 21:13:05.938497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.413 [2024-11-28 21:13:05.938626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.413 [2024-11-28 21:13:05.938749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.413 [2024-11-28 21:13:05.938750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.413 21:13:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.413 21:13:06 -- common/autotest_common.sh@862 -- # return 0 00:05:42.413 21:13:06 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:42.413 21:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.413 21:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:42.413 POWER: Env isn't set yet! 00:05:42.413 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:42.413 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.413 POWER: Cannot set governor of lcore 0 to userspace 00:05:42.413 POWER: Attempting to initialise PSTAT power management... 00:05:42.413 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.413 POWER: Cannot set governor of lcore 0 to performance 00:05:42.413 POWER: Attempting to initialise CPPC power management... 00:05:42.413 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.413 POWER: Cannot set governor of lcore 0 to userspace 00:05:42.413 POWER: Attempting to initialise VM power management... 
00:05:42.413 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:42.413 POWER: Unable to set Power Management Environment for lcore 0 00:05:42.413 [2024-11-28 21:13:06.032678] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:42.413 [2024-11-28 21:13:06.032694] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:42.413 [2024-11-28 21:13:06.032705] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:42.413 [2024-11-28 21:13:06.032719] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:42.413 [2024-11-28 21:13:06.032728] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:42.413 [2024-11-28 21:13:06.032737] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:42.413 21:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.413 21:13:06 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:42.413 21:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.413 21:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:42.413 [2024-11-28 21:13:06.089239] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:42.413 21:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.413 21:13:06 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:42.413 21:13:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.413 21:13:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.413 21:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:42.413 ************************************ 00:05:42.413 START TEST scheduler_create_thread 00:05:42.413 ************************************ 00:05:42.413 21:13:06 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:42.413 21:13:06 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:42.413 21:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.413 21:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:42.413 2 00:05:42.413 21:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.413 21:13:06 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:42.413 21:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.413 21:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:42.413 3 00:05:42.413 21:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.413 21:13:06 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:42.413 21:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.413 21:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:42.413 4 00:05:42.413 21:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.413 21:13:06 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:42.413 21:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.413 21:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:42.413 5 00:05:42.413 21:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.413 21:13:06 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:42.413 21:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.413 21:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:42.413 6 00:05:42.413 21:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.413 21:13:06 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:42.413 21:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.413 21:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:42.672 7 00:05:42.672 21:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.672 21:13:06 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:42.672 21:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.672 21:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:42.672 8 00:05:42.672 21:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.672 21:13:06 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:42.672 21:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.672 21:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:42.672 9 00:05:42.673 21:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.673 21:13:06 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:42.673 21:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.673 21:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:42.673 10 00:05:42.673 21:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.673 21:13:06 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:42.673 21:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.673 21:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:42.673 21:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.673 21:13:06 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:42.673 21:13:06 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:42.673 21:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.673 21:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:42.673 21:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.673 21:13:06 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:42.673 21:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.673 21:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:44.046 21:13:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.046 21:13:07 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:44.046 21:13:07 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:44.046 21:13:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.046 21:13:07 -- common/autotest_common.sh@10 -- # set +x 00:05:44.983 ************************************ 00:05:44.983 END TEST scheduler_create_thread 00:05:44.983 ************************************ 00:05:44.983 21:13:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.983 00:05:44.983 real 0m2.616s 00:05:44.983 user 0m0.016s 00:05:44.983 sys 0m0.009s 00:05:44.983 21:13:08 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.983 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:05:45.243 21:13:08 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:45.243 21:13:08 -- scheduler/scheduler.sh@46 -- # killprocess 66641 00:05:45.243 21:13:08 -- common/autotest_common.sh@936 -- # '[' -z 66641 ']' 00:05:45.243 21:13:08 -- common/autotest_common.sh@940 -- # kill -0 66641 00:05:45.243 21:13:08 -- common/autotest_common.sh@941 -- # uname 00:05:45.243 21:13:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.243 21:13:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66641 00:05:45.243 killing process with pid 66641 00:05:45.243 21:13:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:45.243 21:13:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:45.243 21:13:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66641' 00:05:45.243 21:13:08 -- common/autotest_common.sh@955 -- # kill 66641 00:05:45.243 21:13:08 -- common/autotest_common.sh@960 -- # wait 66641 00:05:45.503 [2024-11-28 21:13:09.196933] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:45.762 ************************************ 00:05:45.762 END TEST event_scheduler 00:05:45.762 ************************************ 00:05:45.762 00:05:45.762 real 0m3.824s 00:05:45.762 user 0m5.728s 00:05:45.762 sys 0m0.332s 00:05:45.762 21:13:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.762 21:13:09 -- common/autotest_common.sh@10 -- # set +x 00:05:45.762 21:13:09 -- event/event.sh@51 -- # modprobe -n nbd 00:05:45.762 21:13:09 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:45.762 21:13:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.762 21:13:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.762 21:13:09 -- common/autotest_common.sh@10 -- # set +x 00:05:45.762 ************************************ 00:05:45.762 START TEST app_repeat 00:05:45.762 ************************************ 00:05:45.762 21:13:09 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:45.762 21:13:09 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.762 21:13:09 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.762 21:13:09 -- event/event.sh@13 -- # local nbd_list 00:05:45.762 21:13:09 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.762 21:13:09 -- event/event.sh@14 -- # local bdev_list 00:05:45.762 21:13:09 -- event/event.sh@15 -- # local repeat_times=4 00:05:45.762 21:13:09 -- event/event.sh@17 -- # modprobe nbd 00:05:45.762 Process app_repeat pid: 66722 00:05:45.762 spdk_app_start Round 0 00:05:45.762 21:13:09 -- event/event.sh@19 -- # repeat_pid=66722 00:05:45.762 21:13:09 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.762 21:13:09 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:45.762 21:13:09 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 66722' 00:05:45.762 21:13:09 -- event/event.sh@23 -- # for i in {0..2} 00:05:45.762 21:13:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:45.762 21:13:09 -- event/event.sh@25 -- # waitforlisten 66722 /var/tmp/spdk-nbd.sock 00:05:45.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:45.762 21:13:09 -- common/autotest_common.sh@829 -- # '[' -z 66722 ']' 00:05:45.762 21:13:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.762 21:13:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.762 21:13:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.762 21:13:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.762 21:13:09 -- common/autotest_common.sh@10 -- # set +x 00:05:45.762 [2024-11-28 21:13:09.440274] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:45.762 [2024-11-28 21:13:09.440521] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66722 ] 00:05:46.021 [2024-11-28 21:13:09.579560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.021 [2024-11-28 21:13:09.623660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.021 [2024-11-28 21:13:09.623666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.021 21:13:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.021 21:13:09 -- common/autotest_common.sh@862 -- # return 0 00:05:46.021 21:13:09 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.280 Malloc0 00:05:46.280 21:13:09 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.539 Malloc1 00:05:46.539 21:13:10 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.539 21:13:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.539 21:13:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.539 21:13:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.539 21:13:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.539 21:13:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.539 21:13:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.539 21:13:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.539 21:13:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.539 21:13:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.539 21:13:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.539 21:13:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.539 21:13:10 -- bdev/nbd_common.sh@12 -- # local i 00:05:46.539 21:13:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.539 21:13:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.539 21:13:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.797 /dev/nbd0 00:05:46.797 21:13:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.797 21:13:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.797 21:13:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:46.797 21:13:10 -- common/autotest_common.sh@867 -- # local i 00:05:46.797 21:13:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:46.797 
21:13:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:46.797 21:13:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:47.056 21:13:10 -- common/autotest_common.sh@871 -- # break 00:05:47.056 21:13:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:47.056 21:13:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:47.056 21:13:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.056 1+0 records in 00:05:47.056 1+0 records out 00:05:47.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030584 s, 13.4 MB/s 00:05:47.056 21:13:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.056 21:13:10 -- common/autotest_common.sh@884 -- # size=4096 00:05:47.056 21:13:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.056 21:13:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:47.056 21:13:10 -- common/autotest_common.sh@887 -- # return 0 00:05:47.056 21:13:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.056 21:13:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.056 21:13:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:47.056 /dev/nbd1 00:05:47.056 21:13:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:47.056 21:13:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:47.056 21:13:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:47.056 21:13:10 -- common/autotest_common.sh@867 -- # local i 00:05:47.056 21:13:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:47.056 21:13:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:47.056 21:13:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:47.056 21:13:10 -- common/autotest_common.sh@871 -- # break 00:05:47.056 21:13:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:47.056 21:13:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:47.056 21:13:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.056 1+0 records in 00:05:47.056 1+0 records out 00:05:47.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343016 s, 11.9 MB/s 00:05:47.056 21:13:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.315 21:13:10 -- common/autotest_common.sh@884 -- # size=4096 00:05:47.315 21:13:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.315 21:13:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:47.315 21:13:10 -- common/autotest_common.sh@887 -- # return 0 00:05:47.315 21:13:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.315 21:13:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.315 21:13:10 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.315 21:13:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.315 21:13:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.574 { 00:05:47.574 "nbd_device": "/dev/nbd0", 00:05:47.574 "bdev_name": "Malloc0" 00:05:47.574 }, 00:05:47.574 { 00:05:47.574 "nbd_device": 
"/dev/nbd1", 00:05:47.574 "bdev_name": "Malloc1" 00:05:47.574 } 00:05:47.574 ]' 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.574 { 00:05:47.574 "nbd_device": "/dev/nbd0", 00:05:47.574 "bdev_name": "Malloc0" 00:05:47.574 }, 00:05:47.574 { 00:05:47.574 "nbd_device": "/dev/nbd1", 00:05:47.574 "bdev_name": "Malloc1" 00:05:47.574 } 00:05:47.574 ]' 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.574 /dev/nbd1' 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.574 /dev/nbd1' 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.574 256+0 records in 00:05:47.574 256+0 records out 00:05:47.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00668384 s, 157 MB/s 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.574 256+0 records in 00:05:47.574 256+0 records out 00:05:47.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258734 s, 40.5 MB/s 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.574 256+0 records in 00:05:47.574 256+0 records out 00:05:47.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030189 s, 34.7 MB/s 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.574 21:13:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@51 -- # local i 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.575 21:13:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.833 21:13:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.833 21:13:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.833 21:13:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.833 21:13:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.833 21:13:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.833 21:13:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.833 21:13:11 -- bdev/nbd_common.sh@41 -- # break 00:05:47.833 21:13:11 -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.833 21:13:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.834 21:13:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:48.093 21:13:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:48.093 21:13:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:48.093 21:13:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:48.093 21:13:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.093 21:13:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.093 21:13:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:48.093 21:13:11 -- bdev/nbd_common.sh@41 -- # break 00:05:48.093 21:13:11 -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.093 21:13:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.093 21:13:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.093 21:13:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.351 21:13:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.351 21:13:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.351 21:13:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.351 21:13:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.351 21:13:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.351 21:13:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.351 21:13:12 -- bdev/nbd_common.sh@65 -- # true 00:05:48.351 21:13:12 -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.351 21:13:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.351 21:13:12 -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.351 21:13:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.351 21:13:12 -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.351 21:13:12 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.918 21:13:12 -- event/event.sh@35 -- # sleep 3 00:05:48.918 [2024-11-28 21:13:12.497890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.918 [2024-11-28 21:13:12.532982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.918 
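The trace above is one complete app_repeat round: two malloc bdevs are created over /var/tmp/spdk-nbd.sock, exported as /dev/nbd0 and /dev/nbd1, filled from a 1 MiB random reference file, verified with cmp, detached, and the app is told to exit before the next round starts. A condensed shell sketch of that sequence, using only the paths and arguments visible in this run (an illustrative replay, not the test script itself):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
$rpc -s $sock bdev_malloc_create 64 4096                    # first bdev, reported back as Malloc0
$rpc -s $sock bdev_malloc_create 64 4096                    # second bdev, Malloc1
$rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0              # export both bdevs as NBD block devices
$rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256         # 1 MiB reference file
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
cmp -b -n 1M nbdrandtest /dev/nbd0                          # verify each device against the reference
cmp -b -n 1M nbdrandtest /dev/nbd1
$rpc -s $sock nbd_stop_disk /dev/nbd0
$rpc -s $sock nbd_stop_disk /dev/nbd1
$rpc -s $sock spdk_kill_instance SIGTERM                    # end this round; app_repeat relaunches for the next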
[2024-11-28 21:13:12.532995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.918 [2024-11-28 21:13:12.566919] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.918 [2024-11-28 21:13:12.566980] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.205 spdk_app_start Round 1 00:05:52.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.205 21:13:15 -- event/event.sh@23 -- # for i in {0..2} 00:05:52.205 21:13:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:52.205 21:13:15 -- event/event.sh@25 -- # waitforlisten 66722 /var/tmp/spdk-nbd.sock 00:05:52.205 21:13:15 -- common/autotest_common.sh@829 -- # '[' -z 66722 ']' 00:05:52.205 21:13:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.205 21:13:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.205 21:13:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.205 21:13:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.205 21:13:15 -- common/autotest_common.sh@10 -- # set +x 00:05:52.205 21:13:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.205 21:13:15 -- common/autotest_common.sh@862 -- # return 0 00:05:52.205 21:13:15 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.205 Malloc0 00:05:52.205 21:13:15 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.463 Malloc1 00:05:52.463 21:13:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.463 21:13:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.463 21:13:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.463 21:13:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.463 21:13:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.463 21:13:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.463 21:13:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.463 21:13:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.463 21:13:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.463 21:13:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.463 21:13:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.463 21:13:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:52.463 21:13:16 -- bdev/nbd_common.sh@12 -- # local i 00:05:52.463 21:13:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.463 21:13:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.463 21:13:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.723 /dev/nbd0 00:05:52.723 21:13:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.723 21:13:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.723 21:13:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:52.723 21:13:16 -- common/autotest_common.sh@867 -- # local i 00:05:52.723 21:13:16 -- common/autotest_common.sh@869 
-- # (( i = 1 )) 00:05:52.723 21:13:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.723 21:13:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:52.723 21:13:16 -- common/autotest_common.sh@871 -- # break 00:05:52.723 21:13:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.723 21:13:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.723 21:13:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.723 1+0 records in 00:05:52.723 1+0 records out 00:05:52.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206903 s, 19.8 MB/s 00:05:52.723 21:13:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.723 21:13:16 -- common/autotest_common.sh@884 -- # size=4096 00:05:52.723 21:13:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.723 21:13:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.723 21:13:16 -- common/autotest_common.sh@887 -- # return 0 00:05:52.723 21:13:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.723 21:13:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.723 21:13:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.981 /dev/nbd1 00:05:52.981 21:13:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.981 21:13:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.981 21:13:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:52.981 21:13:16 -- common/autotest_common.sh@867 -- # local i 00:05:52.981 21:13:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.982 21:13:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.982 21:13:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:52.982 21:13:16 -- common/autotest_common.sh@871 -- # break 00:05:52.982 21:13:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.982 21:13:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.982 21:13:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.982 1+0 records in 00:05:52.982 1+0 records out 00:05:52.982 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503798 s, 8.1 MB/s 00:05:52.982 21:13:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.982 21:13:16 -- common/autotest_common.sh@884 -- # size=4096 00:05:52.982 21:13:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.982 21:13:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.982 21:13:16 -- common/autotest_common.sh@887 -- # return 0 00:05:52.982 21:13:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.982 21:13:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.982 21:13:16 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.982 21:13:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.982 21:13:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.241 { 00:05:53.241 "nbd_device": "/dev/nbd0", 00:05:53.241 "bdev_name": "Malloc0" 00:05:53.241 }, 00:05:53.241 { 
00:05:53.241 "nbd_device": "/dev/nbd1", 00:05:53.241 "bdev_name": "Malloc1" 00:05:53.241 } 00:05:53.241 ]' 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.241 { 00:05:53.241 "nbd_device": "/dev/nbd0", 00:05:53.241 "bdev_name": "Malloc0" 00:05:53.241 }, 00:05:53.241 { 00:05:53.241 "nbd_device": "/dev/nbd1", 00:05:53.241 "bdev_name": "Malloc1" 00:05:53.241 } 00:05:53.241 ]' 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.241 /dev/nbd1' 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.241 /dev/nbd1' 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.241 256+0 records in 00:05:53.241 256+0 records out 00:05:53.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107875 s, 97.2 MB/s 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.241 256+0 records in 00:05:53.241 256+0 records out 00:05:53.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251917 s, 41.6 MB/s 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.241 21:13:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.500 256+0 records in 00:05:53.500 256+0 records out 00:05:53.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285742 s, 36.7 MB/s 00:05:53.500 21:13:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.500 21:13:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.500 21:13:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.500 21:13:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.501 21:13:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.501 21:13:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.501 21:13:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.501 21:13:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.501 21:13:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.501 21:13:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.501 21:13:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.501 
21:13:17 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.501 21:13:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.501 21:13:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.501 21:13:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.501 21:13:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.501 21:13:17 -- bdev/nbd_common.sh@51 -- # local i 00:05:53.501 21:13:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.501 21:13:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.759 21:13:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.759 21:13:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.759 21:13:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.759 21:13:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.759 21:13:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.759 21:13:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.759 21:13:17 -- bdev/nbd_common.sh@41 -- # break 00:05:53.759 21:13:17 -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.759 21:13:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.759 21:13:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.018 21:13:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.018 21:13:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.018 21:13:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.018 21:13:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.018 21:13:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.018 21:13:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.018 21:13:17 -- bdev/nbd_common.sh@41 -- # break 00:05:54.018 21:13:17 -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.018 21:13:17 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.018 21:13:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.018 21:13:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.277 21:13:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.277 21:13:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.277 21:13:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.277 21:13:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.277 21:13:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.277 21:13:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.277 21:13:17 -- bdev/nbd_common.sh@65 -- # true 00:05:54.277 21:13:17 -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.277 21:13:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.277 21:13:17 -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.277 21:13:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.277 21:13:17 -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.277 21:13:17 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.536 21:13:18 -- event/event.sh@35 -- # sleep 3 00:05:54.536 [2024-11-28 21:13:18.245930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.536 [2024-11-28 21:13:18.275778] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 1 00:05:54.536 [2024-11-28 21:13:18.275790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.795 [2024-11-28 21:13:18.308116] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.795 [2024-11-28 21:13:18.308173] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:58.085 spdk_app_start Round 2 00:05:58.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.085 21:13:21 -- event/event.sh@23 -- # for i in {0..2} 00:05:58.085 21:13:21 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:58.085 21:13:21 -- event/event.sh@25 -- # waitforlisten 66722 /var/tmp/spdk-nbd.sock 00:05:58.085 21:13:21 -- common/autotest_common.sh@829 -- # '[' -z 66722 ']' 00:05:58.085 21:13:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.085 21:13:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.085 21:13:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.085 21:13:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.085 21:13:21 -- common/autotest_common.sh@10 -- # set +x 00:05:58.085 21:13:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.085 21:13:21 -- common/autotest_common.sh@862 -- # return 0 00:05:58.085 21:13:21 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.085 Malloc0 00:05:58.085 21:13:21 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.343 Malloc1 00:05:58.343 21:13:22 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.343 21:13:22 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.343 21:13:22 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.343 21:13:22 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.343 21:13:22 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.343 21:13:22 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.344 21:13:22 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.344 21:13:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.344 21:13:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.344 21:13:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.344 21:13:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.344 21:13:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.344 21:13:22 -- bdev/nbd_common.sh@12 -- # local i 00:05:58.344 21:13:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.344 21:13:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.344 21:13:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.602 /dev/nbd0 00:05:58.602 21:13:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.602 21:13:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.602 21:13:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:58.602 21:13:22 -- common/autotest_common.sh@867 -- # local i 00:05:58.602 21:13:22 
-- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:58.602 21:13:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:58.602 21:13:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:58.602 21:13:22 -- common/autotest_common.sh@871 -- # break 00:05:58.602 21:13:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:58.602 21:13:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:58.603 21:13:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.603 1+0 records in 00:05:58.603 1+0 records out 00:05:58.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360013 s, 11.4 MB/s 00:05:58.603 21:13:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.603 21:13:22 -- common/autotest_common.sh@884 -- # size=4096 00:05:58.603 21:13:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.603 21:13:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.603 21:13:22 -- common/autotest_common.sh@887 -- # return 0 00:05:58.603 21:13:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.603 21:13:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.603 21:13:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.862 /dev/nbd1 00:05:58.862 21:13:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.862 21:13:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.862 21:13:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:58.862 21:13:22 -- common/autotest_common.sh@867 -- # local i 00:05:58.862 21:13:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:58.862 21:13:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:58.862 21:13:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:58.862 21:13:22 -- common/autotest_common.sh@871 -- # break 00:05:58.862 21:13:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:58.862 21:13:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:58.862 21:13:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.862 1+0 records in 00:05:58.862 1+0 records out 00:05:58.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341202 s, 12.0 MB/s 00:05:58.862 21:13:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.862 21:13:22 -- common/autotest_common.sh@884 -- # size=4096 00:05:58.862 21:13:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.862 21:13:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.862 21:13:22 -- common/autotest_common.sh@887 -- # return 0 00:05:58.862 21:13:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.862 21:13:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.862 21:13:22 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.862 21:13:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.862 21:13:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.122 21:13:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.122 { 00:05:59.122 "nbd_device": "/dev/nbd0", 00:05:59.122 "bdev_name": "Malloc0" 
00:05:59.122 }, 00:05:59.122 { 00:05:59.122 "nbd_device": "/dev/nbd1", 00:05:59.122 "bdev_name": "Malloc1" 00:05:59.122 } 00:05:59.122 ]' 00:05:59.122 21:13:22 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.122 { 00:05:59.122 "nbd_device": "/dev/nbd0", 00:05:59.122 "bdev_name": "Malloc0" 00:05:59.122 }, 00:05:59.122 { 00:05:59.122 "nbd_device": "/dev/nbd1", 00:05:59.122 "bdev_name": "Malloc1" 00:05:59.122 } 00:05:59.122 ]' 00:05:59.122 21:13:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.381 /dev/nbd1' 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.381 /dev/nbd1' 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.381 256+0 records in 00:05:59.381 256+0 records out 00:05:59.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00751659 s, 140 MB/s 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.381 256+0 records in 00:05:59.381 256+0 records out 00:05:59.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020096 s, 52.2 MB/s 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.381 256+0 records in 00:05:59.381 256+0 records out 00:05:59.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262138 s, 40.0 MB/s 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@51 -- # local i 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.381 21:13:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.641 21:13:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.641 21:13:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.641 21:13:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.641 21:13:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.641 21:13:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.641 21:13:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.641 21:13:23 -- bdev/nbd_common.sh@41 -- # break 00:05:59.641 21:13:23 -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.641 21:13:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.641 21:13:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.900 21:13:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.900 21:13:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.900 21:13:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.900 21:13:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.900 21:13:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.900 21:13:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.900 21:13:23 -- bdev/nbd_common.sh@41 -- # break 00:05:59.900 21:13:23 -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.900 21:13:23 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.900 21:13:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.900 21:13:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.158 21:13:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.158 21:13:23 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.158 21:13:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.158 21:13:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.158 21:13:23 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.158 21:13:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.158 21:13:23 -- bdev/nbd_common.sh@65 -- # true 00:06:00.158 21:13:23 -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.158 21:13:23 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.158 21:13:23 -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.158 21:13:23 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.158 21:13:23 -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.158 21:13:23 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.725 21:13:24 -- event/event.sh@35 -- # sleep 3 00:06:00.725 [2024-11-28 21:13:24.274690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.725 [2024-11-28 21:13:24.309032] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:00.725 [2024-11-28 21:13:24.309038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.725 [2024-11-28 21:13:24.343054] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.725 [2024-11-28 21:13:24.343122] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.030 21:13:27 -- event/event.sh@38 -- # waitforlisten 66722 /var/tmp/spdk-nbd.sock 00:06:04.030 21:13:27 -- common/autotest_common.sh@829 -- # '[' -z 66722 ']' 00:06:04.030 21:13:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.030 21:13:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.030 21:13:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.030 21:13:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.030 21:13:27 -- common/autotest_common.sh@10 -- # set +x 00:06:04.030 21:13:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.030 21:13:27 -- common/autotest_common.sh@862 -- # return 0 00:06:04.030 21:13:27 -- event/event.sh@39 -- # killprocess 66722 00:06:04.030 21:13:27 -- common/autotest_common.sh@936 -- # '[' -z 66722 ']' 00:06:04.030 21:13:27 -- common/autotest_common.sh@940 -- # kill -0 66722 00:06:04.030 21:13:27 -- common/autotest_common.sh@941 -- # uname 00:06:04.030 21:13:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.030 21:13:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66722 00:06:04.030 killing process with pid 66722 00:06:04.030 21:13:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.030 21:13:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.030 21:13:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66722' 00:06:04.030 21:13:27 -- common/autotest_common.sh@955 -- # kill 66722 00:06:04.030 21:13:27 -- common/autotest_common.sh@960 -- # wait 66722 00:06:04.030 spdk_app_start is called in Round 0. 00:06:04.030 Shutdown signal received, stop current app iteration 00:06:04.030 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:04.030 spdk_app_start is called in Round 1. 00:06:04.030 Shutdown signal received, stop current app iteration 00:06:04.030 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:04.030 spdk_app_start is called in Round 2. 00:06:04.030 Shutdown signal received, stop current app iteration 00:06:04.030 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:04.030 spdk_app_start is called in Round 3. 
00:06:04.030 Shutdown signal received, stop current app iteration 00:06:04.030 21:13:27 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:04.030 21:13:27 -- event/event.sh@42 -- # return 0 00:06:04.030 00:06:04.030 real 0m18.165s 00:06:04.030 user 0m41.502s 00:06:04.030 sys 0m2.485s 00:06:04.030 ************************************ 00:06:04.030 END TEST app_repeat 00:06:04.030 ************************************ 00:06:04.030 21:13:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.030 21:13:27 -- common/autotest_common.sh@10 -- # set +x 00:06:04.030 21:13:27 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:04.030 21:13:27 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:04.030 21:13:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.030 21:13:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.030 21:13:27 -- common/autotest_common.sh@10 -- # set +x 00:06:04.030 ************************************ 00:06:04.030 START TEST cpu_locks 00:06:04.030 ************************************ 00:06:04.030 21:13:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:04.030 * Looking for test storage... 00:06:04.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:04.030 21:13:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:04.030 21:13:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:04.031 21:13:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:04.290 21:13:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:04.290 21:13:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:04.290 21:13:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:04.290 21:13:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:04.290 21:13:27 -- scripts/common.sh@335 -- # IFS=.-: 00:06:04.290 21:13:27 -- scripts/common.sh@335 -- # read -ra ver1 00:06:04.290 21:13:27 -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.290 21:13:27 -- scripts/common.sh@336 -- # read -ra ver2 00:06:04.290 21:13:27 -- scripts/common.sh@337 -- # local 'op=<' 00:06:04.290 21:13:27 -- scripts/common.sh@339 -- # ver1_l=2 00:06:04.290 21:13:27 -- scripts/common.sh@340 -- # ver2_l=1 00:06:04.290 21:13:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:04.290 21:13:27 -- scripts/common.sh@343 -- # case "$op" in 00:06:04.290 21:13:27 -- scripts/common.sh@344 -- # : 1 00:06:04.290 21:13:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:04.290 21:13:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.290 21:13:27 -- scripts/common.sh@364 -- # decimal 1 00:06:04.290 21:13:27 -- scripts/common.sh@352 -- # local d=1 00:06:04.290 21:13:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.290 21:13:27 -- scripts/common.sh@354 -- # echo 1 00:06:04.290 21:13:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:04.290 21:13:27 -- scripts/common.sh@365 -- # decimal 2 00:06:04.290 21:13:27 -- scripts/common.sh@352 -- # local d=2 00:06:04.290 21:13:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.290 21:13:27 -- scripts/common.sh@354 -- # echo 2 00:06:04.290 21:13:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:04.290 21:13:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:04.290 21:13:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:04.290 21:13:27 -- scripts/common.sh@367 -- # return 0 00:06:04.290 21:13:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.290 21:13:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:04.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.290 --rc genhtml_branch_coverage=1 00:06:04.290 --rc genhtml_function_coverage=1 00:06:04.290 --rc genhtml_legend=1 00:06:04.290 --rc geninfo_all_blocks=1 00:06:04.290 --rc geninfo_unexecuted_blocks=1 00:06:04.290 00:06:04.290 ' 00:06:04.290 21:13:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:04.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.290 --rc genhtml_branch_coverage=1 00:06:04.290 --rc genhtml_function_coverage=1 00:06:04.290 --rc genhtml_legend=1 00:06:04.290 --rc geninfo_all_blocks=1 00:06:04.290 --rc geninfo_unexecuted_blocks=1 00:06:04.290 00:06:04.290 ' 00:06:04.290 21:13:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:04.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.290 --rc genhtml_branch_coverage=1 00:06:04.290 --rc genhtml_function_coverage=1 00:06:04.290 --rc genhtml_legend=1 00:06:04.290 --rc geninfo_all_blocks=1 00:06:04.290 --rc geninfo_unexecuted_blocks=1 00:06:04.290 00:06:04.290 ' 00:06:04.290 21:13:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:04.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.290 --rc genhtml_branch_coverage=1 00:06:04.290 --rc genhtml_function_coverage=1 00:06:04.290 --rc genhtml_legend=1 00:06:04.290 --rc geninfo_all_blocks=1 00:06:04.290 --rc geninfo_unexecuted_blocks=1 00:06:04.290 00:06:04.290 ' 00:06:04.290 21:13:27 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:04.290 21:13:27 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:04.290 21:13:27 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:04.290 21:13:27 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:04.290 21:13:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.290 21:13:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.290 21:13:27 -- common/autotest_common.sh@10 -- # set +x 00:06:04.290 ************************************ 00:06:04.290 START TEST default_locks 00:06:04.290 ************************************ 00:06:04.290 21:13:27 -- common/autotest_common.sh@1114 -- # default_locks 00:06:04.290 21:13:27 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=67154 00:06:04.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
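The default_locks case that begins here starts a single spdk_tgt pinned to core 0 (-m 0x1) and then asserts, via lslocks, that the process holds the spdk_cpu_lock file for that core. A minimal stand-alone version of the same check (the sleep stands in for the test's waitforlisten polling; the binary path and the lock-file match are as invoked in the trace):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
tgt_pid=$!
sleep 1                                          # the real test polls the RPC socket instead of sleeping
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock    # must succeed while the core-0 lock is held
kill "$tgt_pid"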
00:06:04.290 21:13:27 -- event/cpu_locks.sh@47 -- # waitforlisten 67154 00:06:04.290 21:13:27 -- common/autotest_common.sh@829 -- # '[' -z 67154 ']' 00:06:04.290 21:13:27 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.290 21:13:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.290 21:13:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.290 21:13:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.290 21:13:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.290 21:13:27 -- common/autotest_common.sh@10 -- # set +x 00:06:04.290 [2024-11-28 21:13:27.881993] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:04.290 [2024-11-28 21:13:27.882090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67154 ] 00:06:04.290 [2024-11-28 21:13:28.015592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.550 [2024-11-28 21:13:28.050095] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.550 [2024-11-28 21:13:28.050478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.117 21:13:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.117 21:13:28 -- common/autotest_common.sh@862 -- # return 0 00:06:05.117 21:13:28 -- event/cpu_locks.sh@49 -- # locks_exist 67154 00:06:05.117 21:13:28 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.117 21:13:28 -- event/cpu_locks.sh@22 -- # lslocks -p 67154 00:06:05.685 21:13:29 -- event/cpu_locks.sh@50 -- # killprocess 67154 00:06:05.685 21:13:29 -- common/autotest_common.sh@936 -- # '[' -z 67154 ']' 00:06:05.685 21:13:29 -- common/autotest_common.sh@940 -- # kill -0 67154 00:06:05.685 21:13:29 -- common/autotest_common.sh@941 -- # uname 00:06:05.685 21:13:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:05.685 21:13:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67154 00:06:05.685 killing process with pid 67154 00:06:05.685 21:13:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:05.685 21:13:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:05.685 21:13:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67154' 00:06:05.685 21:13:29 -- common/autotest_common.sh@955 -- # kill 67154 00:06:05.685 21:13:29 -- common/autotest_common.sh@960 -- # wait 67154 00:06:05.945 21:13:29 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 67154 00:06:05.945 21:13:29 -- common/autotest_common.sh@650 -- # local es=0 00:06:05.945 21:13:29 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67154 00:06:05.945 21:13:29 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:05.945 21:13:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.945 21:13:29 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:05.945 21:13:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.945 21:13:29 -- common/autotest_common.sh@653 -- # waitforlisten 67154 00:06:05.945 21:13:29 -- common/autotest_common.sh@829 -- # '[' -z 67154 ']' 00:06:05.945 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:06:05.945 21:13:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.945 21:13:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.945 21:13:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.945 ERROR: process (pid: 67154) is no longer running 00:06:05.945 21:13:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.945 21:13:29 -- common/autotest_common.sh@10 -- # set +x 00:06:05.945 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67154) - No such process 00:06:05.945 21:13:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.945 21:13:29 -- common/autotest_common.sh@862 -- # return 1 00:06:05.945 21:13:29 -- common/autotest_common.sh@653 -- # es=1 00:06:05.945 21:13:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.945 21:13:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:05.945 21:13:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.945 21:13:29 -- event/cpu_locks.sh@54 -- # no_locks 00:06:05.945 21:13:29 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.945 21:13:29 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.945 21:13:29 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:05.945 00:06:05.945 real 0m1.714s 00:06:05.945 user 0m1.972s 00:06:05.945 sys 0m0.428s 00:06:05.945 21:13:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.945 21:13:29 -- common/autotest_common.sh@10 -- # set +x 00:06:05.945 ************************************ 00:06:05.945 END TEST default_locks 00:06:05.945 ************************************ 00:06:05.945 21:13:29 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:05.945 21:13:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.945 21:13:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.945 21:13:29 -- common/autotest_common.sh@10 -- # set +x 00:06:05.945 ************************************ 00:06:05.945 START TEST default_locks_via_rpc 00:06:05.945 ************************************ 00:06:05.945 21:13:29 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:05.945 21:13:29 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=67206 00:06:05.945 21:13:29 -- event/cpu_locks.sh@63 -- # waitforlisten 67206 00:06:05.945 21:13:29 -- common/autotest_common.sh@829 -- # '[' -z 67206 ']' 00:06:05.945 21:13:29 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.945 21:13:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.945 21:13:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.945 21:13:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.945 21:13:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.945 21:13:29 -- common/autotest_common.sh@10 -- # set +x 00:06:05.945 [2024-11-28 21:13:29.647497] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:05.945 [2024-11-28 21:13:29.647751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67206 ] 00:06:06.204 [2024-11-28 21:13:29.778912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.204 [2024-11-28 21:13:29.811606] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:06.204 [2024-11-28 21:13:29.811979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.141 21:13:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.141 21:13:30 -- common/autotest_common.sh@862 -- # return 0 00:06:07.141 21:13:30 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:07.141 21:13:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.141 21:13:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.141 21:13:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.141 21:13:30 -- event/cpu_locks.sh@67 -- # no_locks 00:06:07.141 21:13:30 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.141 21:13:30 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.141 21:13:30 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.141 21:13:30 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:07.141 21:13:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.141 21:13:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.141 21:13:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.142 21:13:30 -- event/cpu_locks.sh@71 -- # locks_exist 67206 00:06:07.142 21:13:30 -- event/cpu_locks.sh@22 -- # lslocks -p 67206 00:06:07.142 21:13:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.401 21:13:31 -- event/cpu_locks.sh@73 -- # killprocess 67206 00:06:07.401 21:13:31 -- common/autotest_common.sh@936 -- # '[' -z 67206 ']' 00:06:07.401 21:13:31 -- common/autotest_common.sh@940 -- # kill -0 67206 00:06:07.401 21:13:31 -- common/autotest_common.sh@941 -- # uname 00:06:07.401 21:13:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.401 21:13:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67206 00:06:07.401 killing process with pid 67206 00:06:07.401 21:13:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.401 21:13:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.401 21:13:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67206' 00:06:07.401 21:13:31 -- common/autotest_common.sh@955 -- # kill 67206 00:06:07.401 21:13:31 -- common/autotest_common.sh@960 -- # wait 67206 00:06:07.659 ************************************ 00:06:07.659 END TEST default_locks_via_rpc 00:06:07.659 ************************************ 00:06:07.659 00:06:07.659 real 0m1.738s 00:06:07.659 user 0m2.039s 00:06:07.659 sys 0m0.439s 00:06:07.659 21:13:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.659 21:13:31 -- common/autotest_common.sh@10 -- # set +x 00:06:07.659 21:13:31 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:07.659 21:13:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.659 21:13:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.659 21:13:31 -- common/autotest_common.sh@10 -- # set +x 00:06:07.659 
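The default_locks_via_rpc run that just completed exercises the same lock, but toggles it over RPC instead of at launch: framework_disable_cpumask_locks releases the per-core lock file, framework_enable_cpumask_locks reclaims it, and lslocks then confirms the lock is held again. In outline, assuming a target is already listening on the default socket as in the previous sketch:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # lock file released; the test's no_locks check passes
$rpc -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # lock re-acquired
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock                # $tgt_pid is the running spdk_tgt from the earlier launch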
************************************ 00:06:07.659 START TEST non_locking_app_on_locked_coremask 00:06:07.659 ************************************ 00:06:07.659 21:13:31 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:07.659 21:13:31 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67252 00:06:07.659 21:13:31 -- event/cpu_locks.sh@81 -- # waitforlisten 67252 /var/tmp/spdk.sock 00:06:07.659 21:13:31 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.659 21:13:31 -- common/autotest_common.sh@829 -- # '[' -z 67252 ']' 00:06:07.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.659 21:13:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.659 21:13:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.659 21:13:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.659 21:13:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.659 21:13:31 -- common/autotest_common.sh@10 -- # set +x 00:06:07.918 [2024-11-28 21:13:31.449812] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:07.918 [2024-11-28 21:13:31.449932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67252 ] 00:06:07.918 [2024-11-28 21:13:31.586721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.918 [2024-11-28 21:13:31.620857] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:07.918 [2024-11-28 21:13:31.621072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.857 21:13:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.857 21:13:32 -- common/autotest_common.sh@862 -- # return 0 00:06:08.857 21:13:32 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:08.857 21:13:32 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67268 00:06:08.857 21:13:32 -- event/cpu_locks.sh@85 -- # waitforlisten 67268 /var/tmp/spdk2.sock 00:06:08.857 21:13:32 -- common/autotest_common.sh@829 -- # '[' -z 67268 ']' 00:06:08.857 21:13:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.857 21:13:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.857 21:13:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.857 21:13:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.857 21:13:32 -- common/autotest_common.sh@10 -- # set +x 00:06:08.857 [2024-11-28 21:13:32.414314] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:08.857 [2024-11-28 21:13:32.414561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67268 ] 00:06:08.857 [2024-11-28 21:13:32.550141] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
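non_locking_app_on_locked_coremask, running here, shows the intended way to share a core: the first target claims core 0 and its lock on the default RPC socket, and a second target is started on the same core with --disable-cpumask-locks and its own socket so the two do not collide. The launch pattern from the trace, in shorthand:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
$spdk_tgt -m 0x1 &                                                   # holds the core-0 lock, RPC at /var/tmp/spdk.sock
$spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # same core, takes no lock ("CPU core locks deactivated")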
00:06:08.857 [2024-11-28 21:13:32.550194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.117 [2024-11-28 21:13:32.617653] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.117 [2024-11-28 21:13:32.617819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.685 21:13:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.685 21:13:33 -- common/autotest_common.sh@862 -- # return 0 00:06:09.685 21:13:33 -- event/cpu_locks.sh@87 -- # locks_exist 67252 00:06:09.685 21:13:33 -- event/cpu_locks.sh@22 -- # lslocks -p 67252 00:06:09.685 21:13:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.623 21:13:34 -- event/cpu_locks.sh@89 -- # killprocess 67252 00:06:10.623 21:13:34 -- common/autotest_common.sh@936 -- # '[' -z 67252 ']' 00:06:10.623 21:13:34 -- common/autotest_common.sh@940 -- # kill -0 67252 00:06:10.623 21:13:34 -- common/autotest_common.sh@941 -- # uname 00:06:10.623 21:13:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:10.623 21:13:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67252 00:06:10.623 21:13:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:10.623 21:13:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:10.623 killing process with pid 67252 00:06:10.623 21:13:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67252' 00:06:10.623 21:13:34 -- common/autotest_common.sh@955 -- # kill 67252 00:06:10.623 21:13:34 -- common/autotest_common.sh@960 -- # wait 67252 00:06:10.881 21:13:34 -- event/cpu_locks.sh@90 -- # killprocess 67268 00:06:10.881 21:13:34 -- common/autotest_common.sh@936 -- # '[' -z 67268 ']' 00:06:10.881 21:13:34 -- common/autotest_common.sh@940 -- # kill -0 67268 00:06:10.881 21:13:34 -- common/autotest_common.sh@941 -- # uname 00:06:11.140 21:13:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:11.140 21:13:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67268 00:06:11.140 21:13:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:11.140 21:13:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:11.140 killing process with pid 67268 00:06:11.140 21:13:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67268' 00:06:11.140 21:13:34 -- common/autotest_common.sh@955 -- # kill 67268 00:06:11.140 21:13:34 -- common/autotest_common.sh@960 -- # wait 67268 00:06:11.140 00:06:11.140 real 0m3.490s 00:06:11.140 user 0m4.106s 00:06:11.140 sys 0m0.861s 00:06:11.140 21:13:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.140 21:13:34 -- common/autotest_common.sh@10 -- # set +x 00:06:11.140 ************************************ 00:06:11.140 END TEST non_locking_app_on_locked_coremask 00:06:11.140 ************************************ 00:06:11.399 21:13:34 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:11.399 21:13:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.399 21:13:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.399 21:13:34 -- common/autotest_common.sh@10 -- # set +x 00:06:11.399 ************************************ 00:06:11.399 START TEST locking_app_on_unlocked_coremask 00:06:11.399 ************************************ 00:06:11.399 21:13:34 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:11.399 21:13:34 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=67329 00:06:11.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.399 21:13:34 -- event/cpu_locks.sh@99 -- # waitforlisten 67329 /var/tmp/spdk.sock 00:06:11.399 21:13:34 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:11.399 21:13:34 -- common/autotest_common.sh@829 -- # '[' -z 67329 ']' 00:06:11.399 21:13:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.399 21:13:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.399 21:13:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.399 21:13:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.399 21:13:34 -- common/autotest_common.sh@10 -- # set +x 00:06:11.399 [2024-11-28 21:13:34.991828] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:11.399 [2024-11-28 21:13:34.992148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67329 ] 00:06:11.399 [2024-11-28 21:13:35.132376] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:11.399 [2024-11-28 21:13:35.132577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.657 [2024-11-28 21:13:35.165733] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.657 [2024-11-28 21:13:35.166175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.224 21:13:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.224 21:13:35 -- common/autotest_common.sh@862 -- # return 0 00:06:12.224 21:13:35 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67345 00:06:12.224 21:13:35 -- event/cpu_locks.sh@103 -- # waitforlisten 67345 /var/tmp/spdk2.sock 00:06:12.224 21:13:35 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.224 21:13:35 -- common/autotest_common.sh@829 -- # '[' -z 67345 ']' 00:06:12.224 21:13:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.224 21:13:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.224 21:13:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.224 21:13:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.224 21:13:35 -- common/autotest_common.sh@10 -- # set +x 00:06:12.484 [2024-11-28 21:13:36.017463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:12.484 [2024-11-28 21:13:36.018260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67345 ] 00:06:12.484 [2024-11-28 21:13:36.164604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.743 [2024-11-28 21:13:36.248584] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.744 [2024-11-28 21:13:36.248760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.312 21:13:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.312 21:13:37 -- common/autotest_common.sh@862 -- # return 0 00:06:13.312 21:13:37 -- event/cpu_locks.sh@105 -- # locks_exist 67345 00:06:13.312 21:13:37 -- event/cpu_locks.sh@22 -- # lslocks -p 67345 00:06:13.312 21:13:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.249 21:13:37 -- event/cpu_locks.sh@107 -- # killprocess 67329 00:06:14.249 21:13:37 -- common/autotest_common.sh@936 -- # '[' -z 67329 ']' 00:06:14.249 21:13:37 -- common/autotest_common.sh@940 -- # kill -0 67329 00:06:14.249 21:13:37 -- common/autotest_common.sh@941 -- # uname 00:06:14.249 21:13:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.249 21:13:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67329 00:06:14.249 killing process with pid 67329 00:06:14.249 21:13:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.249 21:13:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.249 21:13:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67329' 00:06:14.249 21:13:37 -- common/autotest_common.sh@955 -- # kill 67329 00:06:14.249 21:13:37 -- common/autotest_common.sh@960 -- # wait 67329 00:06:14.818 21:13:38 -- event/cpu_locks.sh@108 -- # killprocess 67345 00:06:14.818 21:13:38 -- common/autotest_common.sh@936 -- # '[' -z 67345 ']' 00:06:14.818 21:13:38 -- common/autotest_common.sh@940 -- # kill -0 67345 00:06:14.818 21:13:38 -- common/autotest_common.sh@941 -- # uname 00:06:14.818 21:13:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.818 21:13:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67345 00:06:14.818 killing process with pid 67345 00:06:14.818 21:13:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.818 21:13:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.818 21:13:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67345' 00:06:14.818 21:13:38 -- common/autotest_common.sh@955 -- # kill 67345 00:06:14.818 21:13:38 -- common/autotest_common.sh@960 -- # wait 67345 00:06:14.818 ************************************ 00:06:14.818 END TEST locking_app_on_unlocked_coremask 00:06:14.818 ************************************ 00:06:14.818 00:06:14.818 real 0m3.608s 00:06:14.818 user 0m4.331s 00:06:14.818 sys 0m0.913s 00:06:14.818 21:13:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.818 21:13:38 -- common/autotest_common.sh@10 -- # set +x 00:06:15.078 21:13:38 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:15.078 21:13:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.078 21:13:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.078 21:13:38 -- common/autotest_common.sh@10 -- # set +x 
00:06:15.078 ************************************ 00:06:15.078 START TEST locking_app_on_locked_coremask 00:06:15.078 ************************************ 00:06:15.078 21:13:38 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:15.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.078 21:13:38 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67407 00:06:15.078 21:13:38 -- event/cpu_locks.sh@116 -- # waitforlisten 67407 /var/tmp/spdk.sock 00:06:15.078 21:13:38 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.078 21:13:38 -- common/autotest_common.sh@829 -- # '[' -z 67407 ']' 00:06:15.078 21:13:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.078 21:13:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.078 21:13:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.078 21:13:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.078 21:13:38 -- common/autotest_common.sh@10 -- # set +x 00:06:15.078 [2024-11-28 21:13:38.650904] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:15.078 [2024-11-28 21:13:38.651022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67407 ] 00:06:15.078 [2024-11-28 21:13:38.788262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.337 [2024-11-28 21:13:38.822442] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.337 [2024-11-28 21:13:38.822840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.273 21:13:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.273 21:13:39 -- common/autotest_common.sh@862 -- # return 0 00:06:16.273 21:13:39 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67423 00:06:16.273 21:13:39 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.273 21:13:39 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67423 /var/tmp/spdk2.sock 00:06:16.273 21:13:39 -- common/autotest_common.sh@650 -- # local es=0 00:06:16.273 21:13:39 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67423 /var/tmp/spdk2.sock 00:06:16.273 21:13:39 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:16.274 21:13:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.274 21:13:39 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:16.274 21:13:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.274 21:13:39 -- common/autotest_common.sh@653 -- # waitforlisten 67423 /var/tmp/spdk2.sock 00:06:16.274 21:13:39 -- common/autotest_common.sh@829 -- # '[' -z 67423 ']' 00:06:16.274 21:13:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.274 21:13:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.274 21:13:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:16.274 21:13:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.274 21:13:39 -- common/autotest_common.sh@10 -- # set +x 00:06:16.274 [2024-11-28 21:13:39.704442] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:16.274 [2024-11-28 21:13:39.704680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67423 ] 00:06:16.274 [2024-11-28 21:13:39.836979] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67407 has claimed it. 00:06:16.274 [2024-11-28 21:13:39.841062] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:16.842 ERROR: process (pid: 67423) is no longer running 00:06:16.842 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67423) - No such process 00:06:16.842 21:13:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.842 21:13:40 -- common/autotest_common.sh@862 -- # return 1 00:06:16.842 21:13:40 -- common/autotest_common.sh@653 -- # es=1 00:06:16.842 21:13:40 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.842 21:13:40 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:16.842 21:13:40 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:16.842 21:13:40 -- event/cpu_locks.sh@122 -- # locks_exist 67407 00:06:16.842 21:13:40 -- event/cpu_locks.sh@22 -- # lslocks -p 67407 00:06:16.842 21:13:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.100 21:13:40 -- event/cpu_locks.sh@124 -- # killprocess 67407 00:06:17.100 21:13:40 -- common/autotest_common.sh@936 -- # '[' -z 67407 ']' 00:06:17.100 21:13:40 -- common/autotest_common.sh@940 -- # kill -0 67407 00:06:17.100 21:13:40 -- common/autotest_common.sh@941 -- # uname 00:06:17.100 21:13:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:17.100 21:13:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67407 00:06:17.100 killing process with pid 67407 00:06:17.100 21:13:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:17.100 21:13:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:17.100 21:13:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67407' 00:06:17.100 21:13:40 -- common/autotest_common.sh@955 -- # kill 67407 00:06:17.100 21:13:40 -- common/autotest_common.sh@960 -- # wait 67407 00:06:17.359 ************************************ 00:06:17.359 END TEST locking_app_on_locked_coremask 00:06:17.359 ************************************ 00:06:17.359 00:06:17.359 real 0m2.455s 00:06:17.359 user 0m2.965s 00:06:17.359 sys 0m0.516s 00:06:17.359 21:13:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.359 21:13:41 -- common/autotest_common.sh@10 -- # set +x 00:06:17.359 21:13:41 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:17.359 21:13:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.359 21:13:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.359 21:13:41 -- common/autotest_common.sh@10 -- # set +x 00:06:17.359 ************************************ 00:06:17.359 START TEST locking_overlapped_coremask 00:06:17.359 ************************************ 00:06:17.359 21:13:41 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:17.359 21:13:41 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67468 00:06:17.359 21:13:41 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:17.359 21:13:41 -- event/cpu_locks.sh@133 -- # waitforlisten 67468 /var/tmp/spdk.sock 00:06:17.359 21:13:41 -- common/autotest_common.sh@829 -- # '[' -z 67468 ']' 00:06:17.359 21:13:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.359 21:13:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.359 21:13:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.359 21:13:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.359 21:13:41 -- common/autotest_common.sh@10 -- # set +x 00:06:17.618 [2024-11-28 21:13:41.145029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:17.618 [2024-11-28 21:13:41.145147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67468 ] 00:06:17.618 [2024-11-28 21:13:41.276307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.618 [2024-11-28 21:13:41.308744] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.618 [2024-11-28 21:13:41.309315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.618 [2024-11-28 21:13:41.309365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.618 [2024-11-28 21:13:41.309370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.553 21:13:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.553 21:13:42 -- common/autotest_common.sh@862 -- # return 0 00:06:18.553 21:13:42 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:18.553 21:13:42 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67486 00:06:18.553 21:13:42 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67486 /var/tmp/spdk2.sock 00:06:18.553 21:13:42 -- common/autotest_common.sh@650 -- # local es=0 00:06:18.553 21:13:42 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67486 /var/tmp/spdk2.sock 00:06:18.553 21:13:42 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:18.553 21:13:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.553 21:13:42 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:18.553 21:13:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.553 21:13:42 -- common/autotest_common.sh@653 -- # waitforlisten 67486 /var/tmp/spdk2.sock 00:06:18.553 21:13:42 -- common/autotest_common.sh@829 -- # '[' -z 67486 ']' 00:06:18.553 21:13:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.553 21:13:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.553 21:13:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:18.553 21:13:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.553 21:13:42 -- common/autotest_common.sh@10 -- # set +x 00:06:18.553 [2024-11-28 21:13:42.241606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:18.553 [2024-11-28 21:13:42.241677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67486 ] 00:06:18.811 [2024-11-28 21:13:42.377942] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67468 has claimed it. 00:06:18.811 [2024-11-28 21:13:42.378021] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:19.378 ERROR: process (pid: 67486) is no longer running 00:06:19.378 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67486) - No such process 00:06:19.378 21:13:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.378 21:13:43 -- common/autotest_common.sh@862 -- # return 1 00:06:19.378 21:13:43 -- common/autotest_common.sh@653 -- # es=1 00:06:19.378 21:13:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.378 21:13:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.378 21:13:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.378 21:13:43 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:19.378 21:13:43 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.378 21:13:43 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.378 21:13:43 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.378 21:13:43 -- event/cpu_locks.sh@141 -- # killprocess 67468 00:06:19.378 21:13:43 -- common/autotest_common.sh@936 -- # '[' -z 67468 ']' 00:06:19.378 21:13:43 -- common/autotest_common.sh@940 -- # kill -0 67468 00:06:19.378 21:13:43 -- common/autotest_common.sh@941 -- # uname 00:06:19.378 21:13:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:19.378 21:13:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67468 00:06:19.378 killing process with pid 67468 00:06:19.378 21:13:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:19.378 21:13:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:19.378 21:13:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67468' 00:06:19.378 21:13:43 -- common/autotest_common.sh@955 -- # kill 67468 00:06:19.378 21:13:43 -- common/autotest_common.sh@960 -- # wait 67468 00:06:19.637 00:06:19.637 real 0m2.159s 00:06:19.637 user 0m6.430s 00:06:19.637 sys 0m0.329s 00:06:19.637 21:13:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.637 ************************************ 00:06:19.637 END TEST locking_overlapped_coremask 00:06:19.637 ************************************ 00:06:19.637 21:13:43 -- common/autotest_common.sh@10 -- # set +x 00:06:19.637 21:13:43 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:19.637 21:13:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:19.637 21:13:43 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.637 21:13:43 -- common/autotest_common.sh@10 -- # set +x 00:06:19.637 ************************************ 00:06:19.637 START TEST locking_overlapped_coremask_via_rpc 00:06:19.637 ************************************ 00:06:19.637 21:13:43 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:19.637 21:13:43 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67532 00:06:19.637 21:13:43 -- event/cpu_locks.sh@149 -- # waitforlisten 67532 /var/tmp/spdk.sock 00:06:19.637 21:13:43 -- common/autotest_common.sh@829 -- # '[' -z 67532 ']' 00:06:19.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.637 21:13:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.637 21:13:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.637 21:13:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.637 21:13:43 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:19.637 21:13:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.637 21:13:43 -- common/autotest_common.sh@10 -- # set +x 00:06:19.637 [2024-11-28 21:13:43.349128] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:19.637 [2024-11-28 21:13:43.349212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67532 ] 00:06:19.896 [2024-11-28 21:13:43.483257] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:19.896 [2024-11-28 21:13:43.483484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.896 [2024-11-28 21:13:43.515249] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:19.896 [2024-11-28 21:13:43.515820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.896 [2024-11-28 21:13:43.515899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.896 [2024-11-28 21:13:43.515902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.831 21:13:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.831 21:13:44 -- common/autotest_common.sh@862 -- # return 0 00:06:20.831 21:13:44 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67550 00:06:20.831 21:13:44 -- event/cpu_locks.sh@153 -- # waitforlisten 67550 /var/tmp/spdk2.sock 00:06:20.831 21:13:44 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:20.831 21:13:44 -- common/autotest_common.sh@829 -- # '[' -z 67550 ']' 00:06:20.831 21:13:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.831 21:13:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.831 21:13:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:20.831 21:13:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.831 21:13:44 -- common/autotest_common.sh@10 -- # set +x 00:06:20.831 [2024-11-28 21:13:44.349519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:20.832 [2024-11-28 21:13:44.349823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67550 ] 00:06:20.832 [2024-11-28 21:13:44.493050] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:20.832 [2024-11-28 21:13:44.493100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.832 [2024-11-28 21:13:44.557928] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.832 [2024-11-28 21:13:44.558248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.832 [2024-11-28 21:13:44.558838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:20.832 [2024-11-28 21:13:44.558861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.763 21:13:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.763 21:13:45 -- common/autotest_common.sh@862 -- # return 0 00:06:21.763 21:13:45 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.764 21:13:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.764 21:13:45 -- common/autotest_common.sh@10 -- # set +x 00:06:21.764 21:13:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.764 21:13:45 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.764 21:13:45 -- common/autotest_common.sh@650 -- # local es=0 00:06:21.764 21:13:45 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.764 21:13:45 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:21.764 21:13:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.764 21:13:45 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:21.764 21:13:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.764 21:13:45 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.764 21:13:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.764 21:13:45 -- common/autotest_common.sh@10 -- # set +x 00:06:21.764 [2024-11-28 21:13:45.305122] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67532 has claimed it. 
00:06:21.764 request: 00:06:21.764 { 00:06:21.764 "method": "framework_enable_cpumask_locks", 00:06:21.764 "req_id": 1 00:06:21.764 } 00:06:21.764 Got JSON-RPC error response 00:06:21.764 response: 00:06:21.764 { 00:06:21.764 "code": -32603, 00:06:21.764 "message": "Failed to claim CPU core: 2" 00:06:21.764 } 00:06:21.764 21:13:45 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:21.764 21:13:45 -- common/autotest_common.sh@653 -- # es=1 00:06:21.764 21:13:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.764 21:13:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.764 21:13:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.764 21:13:45 -- event/cpu_locks.sh@158 -- # waitforlisten 67532 /var/tmp/spdk.sock 00:06:21.764 21:13:45 -- common/autotest_common.sh@829 -- # '[' -z 67532 ']' 00:06:21.764 21:13:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.764 21:13:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.764 21:13:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.764 21:13:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.764 21:13:45 -- common/autotest_common.sh@10 -- # set +x 00:06:22.021 21:13:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.021 21:13:45 -- common/autotest_common.sh@862 -- # return 0 00:06:22.021 21:13:45 -- event/cpu_locks.sh@159 -- # waitforlisten 67550 /var/tmp/spdk2.sock 00:06:22.021 21:13:45 -- common/autotest_common.sh@829 -- # '[' -z 67550 ']' 00:06:22.021 21:13:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.021 21:13:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.021 21:13:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:22.021 21:13:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.021 21:13:45 -- common/autotest_common.sh@10 -- # set +x 00:06:22.280 21:13:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.280 21:13:45 -- common/autotest_common.sh@862 -- # return 0 00:06:22.280 21:13:45 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:22.280 21:13:45 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.280 21:13:45 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.280 21:13:45 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.280 00:06:22.280 real 0m2.544s 00:06:22.280 user 0m1.308s 00:06:22.280 sys 0m0.175s 00:06:22.280 21:13:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.280 21:13:45 -- common/autotest_common.sh@10 -- # set +x 00:06:22.280 ************************************ 00:06:22.280 END TEST locking_overlapped_coremask_via_rpc 00:06:22.280 ************************************ 00:06:22.280 21:13:45 -- event/cpu_locks.sh@174 -- # cleanup 00:06:22.280 21:13:45 -- event/cpu_locks.sh@15 -- # [[ -z 67532 ]] 00:06:22.280 21:13:45 -- event/cpu_locks.sh@15 -- # killprocess 67532 00:06:22.280 21:13:45 -- common/autotest_common.sh@936 -- # '[' -z 67532 ']' 00:06:22.280 21:13:45 -- common/autotest_common.sh@940 -- # kill -0 67532 00:06:22.280 21:13:45 -- common/autotest_common.sh@941 -- # uname 00:06:22.280 21:13:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:22.280 21:13:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67532 00:06:22.280 21:13:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:22.280 killing process with pid 67532 00:06:22.280 21:13:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:22.280 21:13:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67532' 00:06:22.280 21:13:45 -- common/autotest_common.sh@955 -- # kill 67532 00:06:22.280 21:13:45 -- common/autotest_common.sh@960 -- # wait 67532 00:06:22.538 21:13:46 -- event/cpu_locks.sh@16 -- # [[ -z 67550 ]] 00:06:22.538 21:13:46 -- event/cpu_locks.sh@16 -- # killprocess 67550 00:06:22.538 21:13:46 -- common/autotest_common.sh@936 -- # '[' -z 67550 ']' 00:06:22.538 21:13:46 -- common/autotest_common.sh@940 -- # kill -0 67550 00:06:22.538 21:13:46 -- common/autotest_common.sh@941 -- # uname 00:06:22.538 21:13:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:22.538 21:13:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67550 00:06:22.538 21:13:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:22.538 21:13:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:22.538 killing process with pid 67550 00:06:22.538 21:13:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67550' 00:06:22.538 21:13:46 -- common/autotest_common.sh@955 -- # kill 67550 00:06:22.538 21:13:46 -- common/autotest_common.sh@960 -- # wait 67550 00:06:22.797 21:13:46 -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.797 21:13:46 -- event/cpu_locks.sh@1 -- # cleanup 00:06:22.797 21:13:46 -- event/cpu_locks.sh@15 -- # [[ -z 67532 ]] 00:06:22.797 21:13:46 -- event/cpu_locks.sh@15 -- # killprocess 67532 00:06:22.797 21:13:46 -- 
common/autotest_common.sh@936 -- # '[' -z 67532 ']' 00:06:22.797 21:13:46 -- common/autotest_common.sh@940 -- # kill -0 67532 00:06:22.798 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67532) - No such process 00:06:22.798 Process with pid 67532 is not found 00:06:22.798 21:13:46 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67532 is not found' 00:06:22.798 21:13:46 -- event/cpu_locks.sh@16 -- # [[ -z 67550 ]] 00:06:22.798 21:13:46 -- event/cpu_locks.sh@16 -- # killprocess 67550 00:06:22.798 21:13:46 -- common/autotest_common.sh@936 -- # '[' -z 67550 ']' 00:06:22.798 21:13:46 -- common/autotest_common.sh@940 -- # kill -0 67550 00:06:22.798 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67550) - No such process 00:06:22.798 Process with pid 67550 is not found 00:06:22.798 21:13:46 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67550 is not found' 00:06:22.798 21:13:46 -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.798 00:06:22.798 real 0m18.767s 00:06:22.798 user 0m34.732s 00:06:22.798 sys 0m4.322s 00:06:22.798 21:13:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.798 21:13:46 -- common/autotest_common.sh@10 -- # set +x 00:06:22.798 ************************************ 00:06:22.798 END TEST cpu_locks 00:06:22.798 ************************************ 00:06:22.798 00:06:22.798 real 0m45.024s 00:06:22.798 user 1m28.468s 00:06:22.798 sys 0m7.529s 00:06:22.798 21:13:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.798 21:13:46 -- common/autotest_common.sh@10 -- # set +x 00:06:22.798 ************************************ 00:06:22.798 END TEST event 00:06:22.798 ************************************ 00:06:22.798 21:13:46 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:22.798 21:13:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.798 21:13:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.798 21:13:46 -- common/autotest_common.sh@10 -- # set +x 00:06:22.798 ************************************ 00:06:22.798 START TEST thread 00:06:22.798 ************************************ 00:06:22.798 21:13:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:23.057 * Looking for test storage... 
00:06:23.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:23.057 21:13:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:23.057 21:13:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:23.057 21:13:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:23.057 21:13:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:23.057 21:13:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:23.057 21:13:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:23.057 21:13:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:23.057 21:13:46 -- scripts/common.sh@335 -- # IFS=.-: 00:06:23.057 21:13:46 -- scripts/common.sh@335 -- # read -ra ver1 00:06:23.057 21:13:46 -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.057 21:13:46 -- scripts/common.sh@336 -- # read -ra ver2 00:06:23.057 21:13:46 -- scripts/common.sh@337 -- # local 'op=<' 00:06:23.057 21:13:46 -- scripts/common.sh@339 -- # ver1_l=2 00:06:23.057 21:13:46 -- scripts/common.sh@340 -- # ver2_l=1 00:06:23.057 21:13:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:23.057 21:13:46 -- scripts/common.sh@343 -- # case "$op" in 00:06:23.057 21:13:46 -- scripts/common.sh@344 -- # : 1 00:06:23.057 21:13:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:23.057 21:13:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.057 21:13:46 -- scripts/common.sh@364 -- # decimal 1 00:06:23.057 21:13:46 -- scripts/common.sh@352 -- # local d=1 00:06:23.057 21:13:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.057 21:13:46 -- scripts/common.sh@354 -- # echo 1 00:06:23.057 21:13:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:23.057 21:13:46 -- scripts/common.sh@365 -- # decimal 2 00:06:23.057 21:13:46 -- scripts/common.sh@352 -- # local d=2 00:06:23.057 21:13:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.057 21:13:46 -- scripts/common.sh@354 -- # echo 2 00:06:23.057 21:13:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:23.057 21:13:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:23.057 21:13:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:23.057 21:13:46 -- scripts/common.sh@367 -- # return 0 00:06:23.057 21:13:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.057 21:13:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:23.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.057 --rc genhtml_branch_coverage=1 00:06:23.057 --rc genhtml_function_coverage=1 00:06:23.058 --rc genhtml_legend=1 00:06:23.058 --rc geninfo_all_blocks=1 00:06:23.058 --rc geninfo_unexecuted_blocks=1 00:06:23.058 00:06:23.058 ' 00:06:23.058 21:13:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:23.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.058 --rc genhtml_branch_coverage=1 00:06:23.058 --rc genhtml_function_coverage=1 00:06:23.058 --rc genhtml_legend=1 00:06:23.058 --rc geninfo_all_blocks=1 00:06:23.058 --rc geninfo_unexecuted_blocks=1 00:06:23.058 00:06:23.058 ' 00:06:23.058 21:13:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:23.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.058 --rc genhtml_branch_coverage=1 00:06:23.058 --rc genhtml_function_coverage=1 00:06:23.058 --rc genhtml_legend=1 00:06:23.058 --rc geninfo_all_blocks=1 00:06:23.058 --rc geninfo_unexecuted_blocks=1 00:06:23.058 00:06:23.058 ' 00:06:23.058 21:13:46 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:23.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.058 --rc genhtml_branch_coverage=1 00:06:23.058 --rc genhtml_function_coverage=1 00:06:23.058 --rc genhtml_legend=1 00:06:23.058 --rc geninfo_all_blocks=1 00:06:23.058 --rc geninfo_unexecuted_blocks=1 00:06:23.058 00:06:23.058 ' 00:06:23.058 21:13:46 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.058 21:13:46 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:23.058 21:13:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.058 21:13:46 -- common/autotest_common.sh@10 -- # set +x 00:06:23.058 ************************************ 00:06:23.058 START TEST thread_poller_perf 00:06:23.058 ************************************ 00:06:23.058 21:13:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.058 [2024-11-28 21:13:46.690184] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:23.058 [2024-11-28 21:13:46.690285] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67674 ] 00:06:23.317 [2024-11-28 21:13:46.821046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.317 [2024-11-28 21:13:46.850145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.317 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:24.332 [2024-11-28T21:13:48.075Z] ====================================== 00:06:24.332 [2024-11-28T21:13:48.075Z] busy:2211057320 (cyc) 00:06:24.332 [2024-11-28T21:13:48.075Z] total_run_count: 348000 00:06:24.332 [2024-11-28T21:13:48.075Z] tsc_hz: 2200000000 (cyc) 00:06:24.332 [2024-11-28T21:13:48.075Z] ====================================== 00:06:24.332 [2024-11-28T21:13:48.075Z] poller_cost: 6353 (cyc), 2887 (nsec) 00:06:24.332 00:06:24.332 real 0m1.228s 00:06:24.332 user 0m1.087s 00:06:24.332 sys 0m0.034s 00:06:24.332 21:13:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.332 21:13:47 -- common/autotest_common.sh@10 -- # set +x 00:06:24.332 ************************************ 00:06:24.332 END TEST thread_poller_perf 00:06:24.332 ************************************ 00:06:24.332 21:13:47 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.332 21:13:47 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:24.332 21:13:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.332 21:13:47 -- common/autotest_common.sh@10 -- # set +x 00:06:24.332 ************************************ 00:06:24.332 START TEST thread_poller_perf 00:06:24.332 ************************************ 00:06:24.332 21:13:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.332 [2024-11-28 21:13:47.971226] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:24.332 [2024-11-28 21:13:47.971314] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67704 ] 00:06:24.591 [2024-11-28 21:13:48.107127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.591 [2024-11-28 21:13:48.135975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.591 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:25.529 [2024-11-28T21:13:49.272Z] ====================================== 00:06:25.529 [2024-11-28T21:13:49.272Z] busy:2202309112 (cyc) 00:06:25.529 [2024-11-28T21:13:49.272Z] total_run_count: 4903000 00:06:25.529 [2024-11-28T21:13:49.272Z] tsc_hz: 2200000000 (cyc) 00:06:25.529 [2024-11-28T21:13:49.272Z] ====================================== 00:06:25.529 [2024-11-28T21:13:49.272Z] poller_cost: 449 (cyc), 204 (nsec) 00:06:25.529 00:06:25.529 real 0m1.226s 00:06:25.529 user 0m1.085s 00:06:25.529 sys 0m0.037s 00:06:25.529 21:13:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.529 21:13:49 -- common/autotest_common.sh@10 -- # set +x 00:06:25.529 ************************************ 00:06:25.529 END TEST thread_poller_perf 00:06:25.529 ************************************ 00:06:25.529 21:13:49 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:25.529 00:06:25.529 real 0m2.743s 00:06:25.529 user 0m2.323s 00:06:25.529 sys 0m0.204s 00:06:25.529 21:13:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.529 21:13:49 -- common/autotest_common.sh@10 -- # set +x 00:06:25.529 ************************************ 00:06:25.529 END TEST thread 00:06:25.529 ************************************ 00:06:25.529 21:13:49 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:25.529 21:13:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.529 21:13:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.529 21:13:49 -- common/autotest_common.sh@10 -- # set +x 00:06:25.789 ************************************ 00:06:25.789 START TEST accel 00:06:25.789 ************************************ 00:06:25.789 21:13:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:25.789 * Looking for test storage... 
00:06:25.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:25.789 21:13:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:25.789 21:13:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:25.789 21:13:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:25.789 21:13:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:25.789 21:13:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:25.789 21:13:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:25.789 21:13:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:25.789 21:13:49 -- scripts/common.sh@335 -- # IFS=.-: 00:06:25.789 21:13:49 -- scripts/common.sh@335 -- # read -ra ver1 00:06:25.789 21:13:49 -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.789 21:13:49 -- scripts/common.sh@336 -- # read -ra ver2 00:06:25.789 21:13:49 -- scripts/common.sh@337 -- # local 'op=<' 00:06:25.789 21:13:49 -- scripts/common.sh@339 -- # ver1_l=2 00:06:25.789 21:13:49 -- scripts/common.sh@340 -- # ver2_l=1 00:06:25.789 21:13:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:25.789 21:13:49 -- scripts/common.sh@343 -- # case "$op" in 00:06:25.789 21:13:49 -- scripts/common.sh@344 -- # : 1 00:06:25.789 21:13:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:25.789 21:13:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.789 21:13:49 -- scripts/common.sh@364 -- # decimal 1 00:06:25.789 21:13:49 -- scripts/common.sh@352 -- # local d=1 00:06:25.789 21:13:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.789 21:13:49 -- scripts/common.sh@354 -- # echo 1 00:06:25.789 21:13:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:25.789 21:13:49 -- scripts/common.sh@365 -- # decimal 2 00:06:25.789 21:13:49 -- scripts/common.sh@352 -- # local d=2 00:06:25.789 21:13:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.789 21:13:49 -- scripts/common.sh@354 -- # echo 2 00:06:25.789 21:13:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:25.789 21:13:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:25.789 21:13:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:25.789 21:13:49 -- scripts/common.sh@367 -- # return 0 00:06:25.789 21:13:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.789 21:13:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:25.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.789 --rc genhtml_branch_coverage=1 00:06:25.789 --rc genhtml_function_coverage=1 00:06:25.789 --rc genhtml_legend=1 00:06:25.789 --rc geninfo_all_blocks=1 00:06:25.789 --rc geninfo_unexecuted_blocks=1 00:06:25.789 00:06:25.789 ' 00:06:25.789 21:13:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:25.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.789 --rc genhtml_branch_coverage=1 00:06:25.789 --rc genhtml_function_coverage=1 00:06:25.789 --rc genhtml_legend=1 00:06:25.789 --rc geninfo_all_blocks=1 00:06:25.789 --rc geninfo_unexecuted_blocks=1 00:06:25.789 00:06:25.789 ' 00:06:25.789 21:13:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:25.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.789 --rc genhtml_branch_coverage=1 00:06:25.789 --rc genhtml_function_coverage=1 00:06:25.789 --rc genhtml_legend=1 00:06:25.789 --rc geninfo_all_blocks=1 00:06:25.789 --rc geninfo_unexecuted_blocks=1 00:06:25.789 00:06:25.789 ' 00:06:25.789 21:13:49 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:25.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.789 --rc genhtml_branch_coverage=1 00:06:25.789 --rc genhtml_function_coverage=1 00:06:25.789 --rc genhtml_legend=1 00:06:25.789 --rc geninfo_all_blocks=1 00:06:25.789 --rc geninfo_unexecuted_blocks=1 00:06:25.789 00:06:25.789 ' 00:06:25.789 21:13:49 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:25.789 21:13:49 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:25.789 21:13:49 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:25.789 21:13:49 -- accel/accel.sh@59 -- # spdk_tgt_pid=67791 00:06:25.789 21:13:49 -- accel/accel.sh@60 -- # waitforlisten 67791 00:06:25.789 21:13:49 -- common/autotest_common.sh@829 -- # '[' -z 67791 ']' 00:06:25.789 21:13:49 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:25.789 21:13:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.789 21:13:49 -- accel/accel.sh@58 -- # build_accel_config 00:06:25.789 21:13:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.789 21:13:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.789 21:13:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.789 21:13:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.789 21:13:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.790 21:13:49 -- common/autotest_common.sh@10 -- # set +x 00:06:25.790 21:13:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.790 21:13:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.790 21:13:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.790 21:13:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.790 21:13:49 -- accel/accel.sh@42 -- # jq -r . 00:06:25.790 [2024-11-28 21:13:49.513411] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:25.790 [2024-11-28 21:13:49.513517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67791 ] 00:06:26.049 [2024-11-28 21:13:49.651934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.049 [2024-11-28 21:13:49.682182] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.049 [2024-11-28 21:13:49.682355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.985 21:13:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.985 21:13:50 -- common/autotest_common.sh@862 -- # return 0 00:06:26.985 21:13:50 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:26.985 21:13:50 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:26.986 21:13:50 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:26.986 21:13:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.986 21:13:50 -- common/autotest_common.sh@10 -- # set +x 00:06:26.986 21:13:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.986 21:13:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # IFS== 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.986 21:13:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.986 21:13:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # IFS== 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.986 21:13:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.986 21:13:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # IFS== 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.986 21:13:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.986 21:13:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # IFS== 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.986 21:13:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.986 21:13:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # IFS== 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.986 21:13:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.986 21:13:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # IFS== 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.986 21:13:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.986 21:13:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # IFS== 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.986 21:13:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.986 21:13:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # IFS== 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.986 21:13:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.986 21:13:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # IFS== 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.986 21:13:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.986 21:13:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # IFS== 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.986 21:13:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.986 21:13:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # IFS== 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.986 21:13:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.986 21:13:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # IFS== 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.986 
21:13:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.986 21:13:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # IFS== 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.986 21:13:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.986 21:13:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # IFS== 00:06:26.986 21:13:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:26.986 21:13:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:26.986 21:13:50 -- accel/accel.sh@67 -- # killprocess 67791 00:06:26.986 21:13:50 -- common/autotest_common.sh@936 -- # '[' -z 67791 ']' 00:06:26.986 21:13:50 -- common/autotest_common.sh@940 -- # kill -0 67791 00:06:26.986 21:13:50 -- common/autotest_common.sh@941 -- # uname 00:06:26.986 21:13:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:26.986 21:13:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67791 00:06:26.986 21:13:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:26.986 21:13:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:26.986 killing process with pid 67791 00:06:26.986 21:13:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67791' 00:06:26.986 21:13:50 -- common/autotest_common.sh@955 -- # kill 67791 00:06:26.986 21:13:50 -- common/autotest_common.sh@960 -- # wait 67791 00:06:27.246 21:13:50 -- accel/accel.sh@68 -- # trap - ERR 00:06:27.246 21:13:50 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:27.246 21:13:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:27.246 21:13:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.246 21:13:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.246 21:13:50 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:27.246 21:13:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:27.246 21:13:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.246 21:13:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.246 21:13:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.246 21:13:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.246 21:13:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.246 21:13:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.246 21:13:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.246 21:13:50 -- accel/accel.sh@42 -- # jq -r . 
00:06:27.246 21:13:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.246 21:13:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.246 21:13:50 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:27.246 21:13:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:27.246 21:13:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.246 21:13:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.246 ************************************ 00:06:27.246 START TEST accel_missing_filename 00:06:27.246 ************************************ 00:06:27.246 21:13:50 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:27.246 21:13:50 -- common/autotest_common.sh@650 -- # local es=0 00:06:27.246 21:13:50 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:27.246 21:13:50 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:27.246 21:13:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.246 21:13:50 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:27.246 21:13:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.246 21:13:50 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:27.246 21:13:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:27.246 21:13:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.246 21:13:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.246 21:13:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.246 21:13:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.246 21:13:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.246 21:13:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.246 21:13:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.246 21:13:50 -- accel/accel.sh@42 -- # jq -r . 00:06:27.246 [2024-11-28 21:13:50.899984] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:27.246 [2024-11-28 21:13:50.900169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67837 ] 00:06:27.506 [2024-11-28 21:13:51.035600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.506 [2024-11-28 21:13:51.064715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.506 [2024-11-28 21:13:51.091429] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.506 [2024-11-28 21:13:51.127451] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:27.506 A filename is required. 
00:06:27.506 21:13:51 -- common/autotest_common.sh@653 -- # es=234 00:06:27.506 21:13:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.506 21:13:51 -- common/autotest_common.sh@662 -- # es=106 00:06:27.506 21:13:51 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:27.506 21:13:51 -- common/autotest_common.sh@670 -- # es=1 00:06:27.506 21:13:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.506 00:06:27.506 real 0m0.300s 00:06:27.506 user 0m0.168s 00:06:27.506 sys 0m0.068s 00:06:27.506 21:13:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.506 21:13:51 -- common/autotest_common.sh@10 -- # set +x 00:06:27.506 ************************************ 00:06:27.506 END TEST accel_missing_filename 00:06:27.506 ************************************ 00:06:27.506 21:13:51 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:27.506 21:13:51 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:27.506 21:13:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.506 21:13:51 -- common/autotest_common.sh@10 -- # set +x 00:06:27.506 ************************************ 00:06:27.506 START TEST accel_compress_verify 00:06:27.506 ************************************ 00:06:27.506 21:13:51 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:27.506 21:13:51 -- common/autotest_common.sh@650 -- # local es=0 00:06:27.506 21:13:51 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:27.506 21:13:51 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:27.506 21:13:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.506 21:13:51 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:27.506 21:13:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.506 21:13:51 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:27.506 21:13:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:27.506 21:13:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.506 21:13:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.506 21:13:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.506 21:13:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.506 21:13:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.506 21:13:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.506 21:13:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.506 21:13:51 -- accel/accel.sh@42 -- # jq -r . 00:06:27.766 [2024-11-28 21:13:51.251608] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:27.766 [2024-11-28 21:13:51.251699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67856 ] 00:06:27.766 [2024-11-28 21:13:51.384866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.766 [2024-11-28 21:13:51.414898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.766 [2024-11-28 21:13:51.444639] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.766 [2024-11-28 21:13:51.482191] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:28.027 00:06:28.027 Compression does not support the verify option, aborting. 00:06:28.027 21:13:51 -- common/autotest_common.sh@653 -- # es=161 00:06:28.027 21:13:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.027 21:13:51 -- common/autotest_common.sh@662 -- # es=33 00:06:28.027 21:13:51 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:28.027 21:13:51 -- common/autotest_common.sh@670 -- # es=1 00:06:28.027 21:13:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.027 00:06:28.027 real 0m0.311s 00:06:28.027 user 0m0.187s 00:06:28.027 sys 0m0.072s 00:06:28.027 21:13:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.027 21:13:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.027 ************************************ 00:06:28.027 END TEST accel_compress_verify 00:06:28.027 ************************************ 00:06:28.027 21:13:51 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:28.027 21:13:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:28.027 21:13:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.027 21:13:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.027 ************************************ 00:06:28.027 START TEST accel_wrong_workload 00:06:28.027 ************************************ 00:06:28.027 21:13:51 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:28.027 21:13:51 -- common/autotest_common.sh@650 -- # local es=0 00:06:28.027 21:13:51 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:28.027 21:13:51 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:28.027 21:13:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.027 21:13:51 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:28.027 21:13:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.027 21:13:51 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:28.027 21:13:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:28.027 21:13:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.027 21:13:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.027 21:13:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.027 21:13:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.027 21:13:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.027 21:13:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.027 21:13:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.027 21:13:51 -- accel/accel.sh@42 -- # jq -r . 
00:06:28.027 Unsupported workload type: foobar 00:06:28.027 [2024-11-28 21:13:51.606875] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:28.027 accel_perf options: 00:06:28.027 [-h help message] 00:06:28.027 [-q queue depth per core] 00:06:28.027 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:28.027 [-T number of threads per core 00:06:28.027 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:28.027 [-t time in seconds] 00:06:28.027 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:28.027 [ dif_verify, , dif_generate, dif_generate_copy 00:06:28.027 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:28.027 [-l for compress/decompress workloads, name of uncompressed input file 00:06:28.027 [-S for crc32c workload, use this seed value (default 0) 00:06:28.027 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:28.027 [-f for fill workload, use this BYTE value (default 255) 00:06:28.027 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:28.027 [-y verify result if this switch is on] 00:06:28.027 [-a tasks to allocate per core (default: same value as -q)] 00:06:28.027 Can be used to spread operations across a wider range of memory. 00:06:28.027 21:13:51 -- common/autotest_common.sh@653 -- # es=1 00:06:28.027 21:13:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.027 21:13:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:28.027 21:13:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.027 00:06:28.027 real 0m0.028s 00:06:28.027 user 0m0.017s 00:06:28.027 sys 0m0.011s 00:06:28.027 21:13:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.027 21:13:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.027 ************************************ 00:06:28.027 END TEST accel_wrong_workload 00:06:28.027 ************************************ 00:06:28.027 21:13:51 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:28.027 21:13:51 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:28.027 21:13:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.027 21:13:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.027 ************************************ 00:06:28.027 START TEST accel_negative_buffers 00:06:28.027 ************************************ 00:06:28.027 21:13:51 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:28.027 21:13:51 -- common/autotest_common.sh@650 -- # local es=0 00:06:28.027 21:13:51 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:28.027 21:13:51 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:28.027 21:13:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.027 21:13:51 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:28.027 21:13:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.027 21:13:51 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:28.027 21:13:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:28.027 21:13:51 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:28.027 21:13:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.027 21:13:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.027 21:13:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.027 21:13:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.027 21:13:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.027 21:13:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.027 21:13:51 -- accel/accel.sh@42 -- # jq -r . 00:06:28.027 -x option must be non-negative. 00:06:28.027 [2024-11-28 21:13:51.678515] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:28.027 accel_perf options: 00:06:28.027 [-h help message] 00:06:28.027 [-q queue depth per core] 00:06:28.027 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:28.027 [-T number of threads per core 00:06:28.027 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:28.027 [-t time in seconds] 00:06:28.027 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:28.027 [ dif_verify, , dif_generate, dif_generate_copy 00:06:28.027 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:28.027 [-l for compress/decompress workloads, name of uncompressed input file 00:06:28.028 [-S for crc32c workload, use this seed value (default 0) 00:06:28.028 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:28.028 [-f for fill workload, use this BYTE value (default 255) 00:06:28.028 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:28.028 [-y verify result if this switch is on] 00:06:28.028 [-a tasks to allocate per core (default: same value as -q)] 00:06:28.028 Can be used to spread operations across a wider range of memory. 
00:06:28.028 21:13:51 -- common/autotest_common.sh@653 -- # es=1 00:06:28.028 21:13:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.028 21:13:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:28.028 21:13:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.028 00:06:28.028 real 0m0.026s 00:06:28.028 user 0m0.015s 00:06:28.028 sys 0m0.011s 00:06:28.028 21:13:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.028 21:13:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.028 ************************************ 00:06:28.028 END TEST accel_negative_buffers 00:06:28.028 ************************************ 00:06:28.028 21:13:51 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:28.028 21:13:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:28.028 21:13:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.028 21:13:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.028 ************************************ 00:06:28.028 START TEST accel_crc32c 00:06:28.028 ************************************ 00:06:28.028 21:13:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:28.028 21:13:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.028 21:13:51 -- accel/accel.sh@17 -- # local accel_module 00:06:28.028 21:13:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:28.028 21:13:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:28.028 21:13:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.028 21:13:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.028 21:13:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.028 21:13:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.028 21:13:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.028 21:13:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.028 21:13:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.028 21:13:51 -- accel/accel.sh@42 -- # jq -r . 00:06:28.028 [2024-11-28 21:13:51.754953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:28.028 [2024-11-28 21:13:51.755062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67920 ] 00:06:28.287 [2024-11-28 21:13:51.892864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.287 [2024-11-28 21:13:51.923091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.667 21:13:53 -- accel/accel.sh@18 -- # out=' 00:06:29.667 SPDK Configuration: 00:06:29.667 Core mask: 0x1 00:06:29.667 00:06:29.667 Accel Perf Configuration: 00:06:29.667 Workload Type: crc32c 00:06:29.667 CRC-32C seed: 32 00:06:29.667 Transfer size: 4096 bytes 00:06:29.667 Vector count 1 00:06:29.668 Module: software 00:06:29.668 Queue depth: 32 00:06:29.668 Allocate depth: 32 00:06:29.668 # threads/core: 1 00:06:29.668 Run time: 1 seconds 00:06:29.668 Verify: Yes 00:06:29.668 00:06:29.668 Running for 1 seconds... 
00:06:29.668 00:06:29.668 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.668 ------------------------------------------------------------------------------------ 00:06:29.668 0,0 520256/s 2032 MiB/s 0 0 00:06:29.668 ==================================================================================== 00:06:29.668 Total 520256/s 2032 MiB/s 0 0' 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:29.668 21:13:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:29.668 21:13:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.668 21:13:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.668 21:13:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.668 21:13:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.668 21:13:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.668 21:13:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.668 21:13:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.668 21:13:53 -- accel/accel.sh@42 -- # jq -r . 00:06:29.668 [2024-11-28 21:13:53.063639] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:29.668 [2024-11-28 21:13:53.063737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67934 ] 00:06:29.668 [2024-11-28 21:13:53.198281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.668 [2024-11-28 21:13:53.230795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val= 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val= 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val=0x1 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val= 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val= 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val=crc32c 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val=32 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val= 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val=software 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val=32 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val=32 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val=1 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val=Yes 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val= 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:29.668 21:13:53 -- accel/accel.sh@21 -- # val= 00:06:29.668 21:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:29.668 21:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:30.605 21:13:54 -- accel/accel.sh@21 -- # val= 00:06:30.605 21:13:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.605 21:13:54 -- accel/accel.sh@20 -- # IFS=: 00:06:30.605 21:13:54 -- accel/accel.sh@20 -- # read -r var val 00:06:30.605 21:13:54 -- accel/accel.sh@21 -- # val= 00:06:30.605 21:13:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.605 21:13:54 -- accel/accel.sh@20 -- # IFS=: 00:06:30.605 21:13:54 -- accel/accel.sh@20 -- # read -r var val 00:06:30.605 21:13:54 -- accel/accel.sh@21 -- # val= 00:06:30.605 21:13:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.605 21:13:54 -- accel/accel.sh@20 -- # IFS=: 00:06:30.605 21:13:54 -- accel/accel.sh@20 -- # read -r var val 00:06:30.605 21:13:54 -- accel/accel.sh@21 -- # val= 00:06:30.605 21:13:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.605 21:13:54 -- accel/accel.sh@20 -- # IFS=: 00:06:30.865 21:13:54 -- accel/accel.sh@20 -- # read -r var val 00:06:30.865 21:13:54 -- accel/accel.sh@21 -- # val= 00:06:30.865 21:13:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.865 21:13:54 -- accel/accel.sh@20 -- # IFS=: 00:06:30.865 21:13:54 -- 
accel/accel.sh@20 -- # read -r var val 00:06:30.865 21:13:54 -- accel/accel.sh@21 -- # val= 00:06:30.865 21:13:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.865 21:13:54 -- accel/accel.sh@20 -- # IFS=: 00:06:30.865 21:13:54 -- accel/accel.sh@20 -- # read -r var val 00:06:30.865 21:13:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.865 21:13:54 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:30.865 21:13:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.865 ************************************ 00:06:30.865 END TEST accel_crc32c 00:06:30.865 ************************************ 00:06:30.865 00:06:30.865 real 0m2.620s 00:06:30.865 user 0m2.285s 00:06:30.865 sys 0m0.134s 00:06:30.865 21:13:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.865 21:13:54 -- common/autotest_common.sh@10 -- # set +x 00:06:30.865 21:13:54 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:30.865 21:13:54 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:30.865 21:13:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.865 21:13:54 -- common/autotest_common.sh@10 -- # set +x 00:06:30.865 ************************************ 00:06:30.865 START TEST accel_crc32c_C2 00:06:30.865 ************************************ 00:06:30.865 21:13:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:30.865 21:13:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.865 21:13:54 -- accel/accel.sh@17 -- # local accel_module 00:06:30.865 21:13:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:30.865 21:13:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:30.865 21:13:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.865 21:13:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.865 21:13:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.865 21:13:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.865 21:13:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.865 21:13:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.865 21:13:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.865 21:13:54 -- accel/accel.sh@42 -- # jq -r . 00:06:30.865 [2024-11-28 21:13:54.425669] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:30.865 [2024-11-28 21:13:54.425757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67963 ] 00:06:30.865 [2024-11-28 21:13:54.559215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.865 [2024-11-28 21:13:54.592532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.241 21:13:55 -- accel/accel.sh@18 -- # out=' 00:06:32.241 SPDK Configuration: 00:06:32.241 Core mask: 0x1 00:06:32.241 00:06:32.241 Accel Perf Configuration: 00:06:32.241 Workload Type: crc32c 00:06:32.241 CRC-32C seed: 0 00:06:32.241 Transfer size: 4096 bytes 00:06:32.241 Vector count 2 00:06:32.241 Module: software 00:06:32.241 Queue depth: 32 00:06:32.241 Allocate depth: 32 00:06:32.241 # threads/core: 1 00:06:32.241 Run time: 1 seconds 00:06:32.241 Verify: Yes 00:06:32.241 00:06:32.241 Running for 1 seconds... 
00:06:32.241 00:06:32.241 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.241 ------------------------------------------------------------------------------------ 00:06:32.241 0,0 406560/s 3176 MiB/s 0 0 00:06:32.241 ==================================================================================== 00:06:32.241 Total 406560/s 1588 MiB/s 0 0' 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.241 21:13:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.241 21:13:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:32.241 21:13:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.241 21:13:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.241 21:13:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.241 21:13:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.241 21:13:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.241 21:13:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.241 21:13:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.241 21:13:55 -- accel/accel.sh@42 -- # jq -r . 00:06:32.241 [2024-11-28 21:13:55.730640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:32.241 [2024-11-28 21:13:55.730729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67990 ] 00:06:32.241 [2024-11-28 21:13:55.866427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.241 [2024-11-28 21:13:55.904772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.241 21:13:55 -- accel/accel.sh@21 -- # val= 00:06:32.241 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.241 21:13:55 -- accel/accel.sh@21 -- # val= 00:06:32.241 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.241 21:13:55 -- accel/accel.sh@21 -- # val=0x1 00:06:32.241 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.241 21:13:55 -- accel/accel.sh@21 -- # val= 00:06:32.241 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.241 21:13:55 -- accel/accel.sh@21 -- # val= 00:06:32.241 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.241 21:13:55 -- accel/accel.sh@21 -- # val=crc32c 00:06:32.241 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.241 21:13:55 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.241 21:13:55 -- accel/accel.sh@21 -- # val=0 00:06:32.241 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.241 21:13:55 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.241 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.241 21:13:55 -- accel/accel.sh@21 -- # val= 00:06:32.241 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.241 21:13:55 -- accel/accel.sh@21 -- # val=software 00:06:32.241 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.241 21:13:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.241 21:13:55 -- accel/accel.sh@21 -- # val=32 00:06:32.241 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.241 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.241 21:13:55 -- accel/accel.sh@21 -- # val=32 00:06:32.241 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.242 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.242 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.242 21:13:55 -- accel/accel.sh@21 -- # val=1 00:06:32.242 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.242 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.242 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.242 21:13:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.242 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.242 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.242 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.242 21:13:55 -- accel/accel.sh@21 -- # val=Yes 00:06:32.242 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.242 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.242 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.242 21:13:55 -- accel/accel.sh@21 -- # val= 00:06:32.242 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.242 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.242 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.242 21:13:55 -- accel/accel.sh@21 -- # val= 00:06:32.242 21:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.242 21:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.242 21:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:33.616 21:13:57 -- accel/accel.sh@21 -- # val= 00:06:33.616 21:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.616 21:13:57 -- accel/accel.sh@20 -- # IFS=: 00:06:33.616 21:13:57 -- accel/accel.sh@20 -- # read -r var val 00:06:33.616 21:13:57 -- accel/accel.sh@21 -- # val= 00:06:33.616 21:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.616 21:13:57 -- accel/accel.sh@20 -- # IFS=: 00:06:33.616 21:13:57 -- accel/accel.sh@20 -- # read -r var val 00:06:33.616 21:13:57 -- accel/accel.sh@21 -- # val= 00:06:33.616 21:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.616 21:13:57 -- accel/accel.sh@20 -- # IFS=: 00:06:33.616 21:13:57 -- accel/accel.sh@20 -- # read -r var val 00:06:33.616 21:13:57 -- accel/accel.sh@21 -- # val= 00:06:33.616 21:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.616 21:13:57 -- accel/accel.sh@20 -- # IFS=: 00:06:33.616 21:13:57 -- accel/accel.sh@20 -- # read -r var val 00:06:33.616 21:13:57 -- accel/accel.sh@21 -- # val= 00:06:33.616 21:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.616 21:13:57 -- accel/accel.sh@20 -- # IFS=: 00:06:33.616 21:13:57 -- 
accel/accel.sh@20 -- # read -r var val 00:06:33.616 21:13:57 -- accel/accel.sh@21 -- # val= 00:06:33.616 21:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.616 21:13:57 -- accel/accel.sh@20 -- # IFS=: 00:06:33.616 21:13:57 -- accel/accel.sh@20 -- # read -r var val 00:06:33.616 21:13:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.616 21:13:57 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:33.616 21:13:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.616 00:06:33.616 real 0m2.633s 00:06:33.616 user 0m2.301s 00:06:33.616 sys 0m0.138s 00:06:33.616 21:13:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.616 21:13:57 -- common/autotest_common.sh@10 -- # set +x 00:06:33.616 ************************************ 00:06:33.616 END TEST accel_crc32c_C2 00:06:33.616 ************************************ 00:06:33.616 21:13:57 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:33.616 21:13:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:33.616 21:13:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.616 21:13:57 -- common/autotest_common.sh@10 -- # set +x 00:06:33.616 ************************************ 00:06:33.616 START TEST accel_copy 00:06:33.616 ************************************ 00:06:33.616 21:13:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:33.617 21:13:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.617 21:13:57 -- accel/accel.sh@17 -- # local accel_module 00:06:33.617 21:13:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:33.617 21:13:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:33.617 21:13:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.617 21:13:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.617 21:13:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.617 21:13:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.617 21:13:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.617 21:13:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.617 21:13:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.617 21:13:57 -- accel/accel.sh@42 -- # jq -r . 00:06:33.617 [2024-11-28 21:13:57.104546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:33.617 [2024-11-28 21:13:57.104644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68019 ] 00:06:33.617 [2024-11-28 21:13:57.226679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.617 [2024-11-28 21:13:57.256170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.992 21:13:58 -- accel/accel.sh@18 -- # out=' 00:06:34.992 SPDK Configuration: 00:06:34.992 Core mask: 0x1 00:06:34.992 00:06:34.992 Accel Perf Configuration: 00:06:34.992 Workload Type: copy 00:06:34.992 Transfer size: 4096 bytes 00:06:34.992 Vector count 1 00:06:34.992 Module: software 00:06:34.992 Queue depth: 32 00:06:34.992 Allocate depth: 32 00:06:34.992 # threads/core: 1 00:06:34.992 Run time: 1 seconds 00:06:34.992 Verify: Yes 00:06:34.992 00:06:34.992 Running for 1 seconds... 
00:06:34.992 00:06:34.992 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.992 ------------------------------------------------------------------------------------ 00:06:34.992 0,0 360864/s 1409 MiB/s 0 0 00:06:34.992 ==================================================================================== 00:06:34.992 Total 360864/s 1409 MiB/s 0 0' 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.992 21:13:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.992 21:13:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:34.992 21:13:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.992 21:13:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.992 21:13:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.992 21:13:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.992 21:13:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.992 21:13:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.992 21:13:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.992 21:13:58 -- accel/accel.sh@42 -- # jq -r . 00:06:34.992 [2024-11-28 21:13:58.392581] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:34.992 [2024-11-28 21:13:58.392671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68033 ] 00:06:34.992 [2024-11-28 21:13:58.527414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.992 [2024-11-28 21:13:58.556584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.992 21:13:58 -- accel/accel.sh@21 -- # val= 00:06:34.992 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.992 21:13:58 -- accel/accel.sh@21 -- # val= 00:06:34.992 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.992 21:13:58 -- accel/accel.sh@21 -- # val=0x1 00:06:34.992 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.992 21:13:58 -- accel/accel.sh@21 -- # val= 00:06:34.992 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.992 21:13:58 -- accel/accel.sh@21 -- # val= 00:06:34.992 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.992 21:13:58 -- accel/accel.sh@21 -- # val=copy 00:06:34.992 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.992 21:13:58 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.992 21:13:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.992 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.992 21:13:58 -- 
accel/accel.sh@21 -- # val= 00:06:34.992 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.992 21:13:58 -- accel/accel.sh@21 -- # val=software 00:06:34.992 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.992 21:13:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.992 21:13:58 -- accel/accel.sh@21 -- # val=32 00:06:34.992 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.992 21:13:58 -- accel/accel.sh@21 -- # val=32 00:06:34.992 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.992 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.992 21:13:58 -- accel/accel.sh@21 -- # val=1 00:06:34.993 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.993 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.993 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.993 21:13:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.993 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.993 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.993 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.993 21:13:58 -- accel/accel.sh@21 -- # val=Yes 00:06:34.993 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.993 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.993 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.993 21:13:58 -- accel/accel.sh@21 -- # val= 00:06:34.993 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.993 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.993 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.993 21:13:58 -- accel/accel.sh@21 -- # val= 00:06:34.993 21:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.993 21:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.993 21:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:36.367 21:13:59 -- accel/accel.sh@21 -- # val= 00:06:36.367 21:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.367 21:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:36.367 21:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:36.367 21:13:59 -- accel/accel.sh@21 -- # val= 00:06:36.367 21:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.367 21:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:36.367 21:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:36.367 21:13:59 -- accel/accel.sh@21 -- # val= 00:06:36.367 21:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.367 21:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:36.367 21:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:36.367 21:13:59 -- accel/accel.sh@21 -- # val= 00:06:36.367 21:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.367 21:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:36.367 21:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:36.367 21:13:59 -- accel/accel.sh@21 -- # val= 00:06:36.367 21:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.367 21:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:36.367 21:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:36.367 21:13:59 -- accel/accel.sh@21 -- # val= 00:06:36.367 21:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.367 21:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:36.367 21:13:59 -- 
accel/accel.sh@20 -- # read -r var val 00:06:36.367 21:13:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:36.367 21:13:59 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:36.367 21:13:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.367 00:06:36.367 real 0m2.595s 00:06:36.367 user 0m2.267s 00:06:36.367 sys 0m0.128s 00:06:36.367 21:13:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.367 21:13:59 -- common/autotest_common.sh@10 -- # set +x 00:06:36.367 ************************************ 00:06:36.367 END TEST accel_copy 00:06:36.367 ************************************ 00:06:36.367 21:13:59 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:36.367 21:13:59 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:36.367 21:13:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.367 21:13:59 -- common/autotest_common.sh@10 -- # set +x 00:06:36.367 ************************************ 00:06:36.367 START TEST accel_fill 00:06:36.367 ************************************ 00:06:36.367 21:13:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:36.367 21:13:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.367 21:13:59 -- accel/accel.sh@17 -- # local accel_module 00:06:36.367 21:13:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:36.367 21:13:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:36.367 21:13:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.367 21:13:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.367 21:13:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.367 21:13:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.367 21:13:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.367 21:13:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.367 21:13:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.367 21:13:59 -- accel/accel.sh@42 -- # jq -r . 00:06:36.367 [2024-11-28 21:13:59.743991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:36.367 [2024-11-28 21:13:59.744117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68073 ] 00:06:36.367 [2024-11-28 21:13:59.877376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.367 [2024-11-28 21:13:59.909050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.300 21:14:01 -- accel/accel.sh@18 -- # out=' 00:06:37.300 SPDK Configuration: 00:06:37.300 Core mask: 0x1 00:06:37.300 00:06:37.300 Accel Perf Configuration: 00:06:37.300 Workload Type: fill 00:06:37.300 Fill pattern: 0x80 00:06:37.300 Transfer size: 4096 bytes 00:06:37.300 Vector count 1 00:06:37.300 Module: software 00:06:37.300 Queue depth: 64 00:06:37.300 Allocate depth: 64 00:06:37.300 # threads/core: 1 00:06:37.300 Run time: 1 seconds 00:06:37.300 Verify: Yes 00:06:37.300 00:06:37.301 Running for 1 seconds... 
00:06:37.301 00:06:37.301 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.301 ------------------------------------------------------------------------------------ 00:06:37.301 0,0 535552/s 2092 MiB/s 0 0 00:06:37.301 ==================================================================================== 00:06:37.301 Total 535552/s 2092 MiB/s 0 0' 00:06:37.301 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.301 21:14:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:37.301 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.301 21:14:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:37.301 21:14:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.301 21:14:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.301 21:14:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.301 21:14:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.301 21:14:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.301 21:14:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.301 21:14:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.301 21:14:01 -- accel/accel.sh@42 -- # jq -r . 00:06:37.559 [2024-11-28 21:14:01.059913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:37.559 [2024-11-28 21:14:01.060023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68087 ] 00:06:37.559 [2024-11-28 21:14:01.195816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.559 [2024-11-28 21:14:01.227367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val= 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val= 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val=0x1 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val= 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val= 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val=fill 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val=0x80 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 
00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val= 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val=software 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val=64 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val=64 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val=1 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val=Yes 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val= 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 21:14:01 -- accel/accel.sh@21 -- # val= 00:06:37.559 21:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 21:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:38.932 21:14:02 -- accel/accel.sh@21 -- # val= 00:06:38.932 21:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.932 21:14:02 -- accel/accel.sh@20 -- # IFS=: 00:06:38.932 21:14:02 -- accel/accel.sh@20 -- # read -r var val 00:06:38.932 21:14:02 -- accel/accel.sh@21 -- # val= 00:06:38.932 21:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.932 21:14:02 -- accel/accel.sh@20 -- # IFS=: 00:06:38.932 21:14:02 -- accel/accel.sh@20 -- # read -r var val 00:06:38.932 21:14:02 -- accel/accel.sh@21 -- # val= 00:06:38.932 21:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.932 21:14:02 -- accel/accel.sh@20 -- # IFS=: 00:06:38.932 21:14:02 -- accel/accel.sh@20 -- # read -r var val 00:06:38.932 21:14:02 -- accel/accel.sh@21 -- # val= 00:06:38.932 21:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.932 21:14:02 -- accel/accel.sh@20 -- # IFS=: 00:06:38.932 21:14:02 -- accel/accel.sh@20 -- # read -r var val 00:06:38.932 21:14:02 -- accel/accel.sh@21 -- # val= 00:06:38.932 21:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.932 21:14:02 -- accel/accel.sh@20 -- # IFS=: 
00:06:38.932 21:14:02 -- accel/accel.sh@20 -- # read -r var val 00:06:38.932 21:14:02 -- accel/accel.sh@21 -- # val= 00:06:38.932 21:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.932 21:14:02 -- accel/accel.sh@20 -- # IFS=: 00:06:38.932 21:14:02 -- accel/accel.sh@20 -- # read -r var val 00:06:38.932 21:14:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.932 21:14:02 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:38.932 21:14:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.932 00:06:38.932 real 0m2.625s 00:06:38.932 user 0m2.287s 00:06:38.932 sys 0m0.142s 00:06:38.932 21:14:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.932 21:14:02 -- common/autotest_common.sh@10 -- # set +x 00:06:38.932 ************************************ 00:06:38.932 END TEST accel_fill 00:06:38.932 ************************************ 00:06:38.932 21:14:02 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:38.932 21:14:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:38.932 21:14:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.932 21:14:02 -- common/autotest_common.sh@10 -- # set +x 00:06:38.932 ************************************ 00:06:38.932 START TEST accel_copy_crc32c 00:06:38.932 ************************************ 00:06:38.932 21:14:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:38.932 21:14:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.932 21:14:02 -- accel/accel.sh@17 -- # local accel_module 00:06:38.932 21:14:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:38.932 21:14:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:38.932 21:14:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.932 21:14:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.932 21:14:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.932 21:14:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.932 21:14:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.932 21:14:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.932 21:14:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.932 21:14:02 -- accel/accel.sh@42 -- # jq -r . 00:06:38.932 [2024-11-28 21:14:02.419988] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:38.932 [2024-11-28 21:14:02.420091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68116 ] 00:06:38.932 [2024-11-28 21:14:02.555694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.932 [2024-11-28 21:14:02.585715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.307 21:14:03 -- accel/accel.sh@18 -- # out=' 00:06:40.307 SPDK Configuration: 00:06:40.307 Core mask: 0x1 00:06:40.307 00:06:40.307 Accel Perf Configuration: 00:06:40.307 Workload Type: copy_crc32c 00:06:40.307 CRC-32C seed: 0 00:06:40.307 Vector size: 4096 bytes 00:06:40.307 Transfer size: 4096 bytes 00:06:40.307 Vector count 1 00:06:40.307 Module: software 00:06:40.307 Queue depth: 32 00:06:40.307 Allocate depth: 32 00:06:40.307 # threads/core: 1 00:06:40.307 Run time: 1 seconds 00:06:40.307 Verify: Yes 00:06:40.307 00:06:40.307 Running for 1 seconds... 
00:06:40.307 00:06:40.307 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.307 ------------------------------------------------------------------------------------ 00:06:40.307 0,0 289728/s 1131 MiB/s 0 0 00:06:40.307 ==================================================================================== 00:06:40.307 Total 289728/s 1131 MiB/s 0 0' 00:06:40.307 21:14:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.307 21:14:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:40.307 21:14:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.307 21:14:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.307 21:14:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.307 21:14:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.307 21:14:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.307 21:14:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.307 21:14:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.307 21:14:03 -- accel/accel.sh@42 -- # jq -r . 00:06:40.307 [2024-11-28 21:14:03.719717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:40.307 [2024-11-28 21:14:03.719795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68136 ] 00:06:40.307 [2024-11-28 21:14:03.847101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.307 [2024-11-28 21:14:03.879891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.307 21:14:03 -- accel/accel.sh@21 -- # val= 00:06:40.307 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.307 21:14:03 -- accel/accel.sh@21 -- # val= 00:06:40.307 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.307 21:14:03 -- accel/accel.sh@21 -- # val=0x1 00:06:40.307 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.307 21:14:03 -- accel/accel.sh@21 -- # val= 00:06:40.307 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.307 21:14:03 -- accel/accel.sh@21 -- # val= 00:06:40.307 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.307 21:14:03 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:40.307 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.307 21:14:03 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.307 21:14:03 -- accel/accel.sh@21 -- # val=0 00:06:40.307 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.307 
21:14:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.307 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.307 21:14:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.307 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.307 21:14:03 -- accel/accel.sh@21 -- # val= 00:06:40.307 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.307 21:14:03 -- accel/accel.sh@21 -- # val=software 00:06:40.307 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.307 21:14:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.307 21:14:03 -- accel/accel.sh@21 -- # val=32 00:06:40.307 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.307 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.308 21:14:03 -- accel/accel.sh@21 -- # val=32 00:06:40.308 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.308 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.308 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.308 21:14:03 -- accel/accel.sh@21 -- # val=1 00:06:40.308 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.308 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.308 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.308 21:14:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.308 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.308 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.308 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.308 21:14:03 -- accel/accel.sh@21 -- # val=Yes 00:06:40.308 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.308 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.308 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.308 21:14:03 -- accel/accel.sh@21 -- # val= 00:06:40.308 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.308 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.308 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.308 21:14:03 -- accel/accel.sh@21 -- # val= 00:06:40.308 21:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.308 21:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.308 21:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:41.689 21:14:04 -- accel/accel.sh@21 -- # val= 00:06:41.689 21:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.689 21:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:41.689 21:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:41.689 21:14:04 -- accel/accel.sh@21 -- # val= 00:06:41.689 21:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.689 21:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:41.689 21:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:41.689 21:14:04 -- accel/accel.sh@21 -- # val= 00:06:41.689 21:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.689 21:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:41.689 21:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:41.689 21:14:04 -- accel/accel.sh@21 -- # val= 00:06:41.689 21:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.689 21:14:05 -- accel/accel.sh@20 -- # IFS=: 
00:06:41.689 21:14:05 -- accel/accel.sh@20 -- # read -r var val 00:06:41.689 21:14:05 -- accel/accel.sh@21 -- # val= 00:06:41.689 21:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.689 21:14:05 -- accel/accel.sh@20 -- # IFS=: 00:06:41.689 21:14:05 -- accel/accel.sh@20 -- # read -r var val 00:06:41.689 21:14:05 -- accel/accel.sh@21 -- # val= 00:06:41.689 21:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.689 21:14:05 -- accel/accel.sh@20 -- # IFS=: 00:06:41.689 21:14:05 -- accel/accel.sh@20 -- # read -r var val 00:06:41.689 21:14:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.689 21:14:05 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:41.689 21:14:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.689 00:06:41.689 real 0m2.608s 00:06:41.689 user 0m2.276s 00:06:41.689 sys 0m0.134s 00:06:41.689 21:14:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.689 ************************************ 00:06:41.689 END TEST accel_copy_crc32c 00:06:41.689 ************************************ 00:06:41.689 21:14:05 -- common/autotest_common.sh@10 -- # set +x 00:06:41.689 21:14:05 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:41.690 21:14:05 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:41.690 21:14:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.690 21:14:05 -- common/autotest_common.sh@10 -- # set +x 00:06:41.690 ************************************ 00:06:41.690 START TEST accel_copy_crc32c_C2 00:06:41.690 ************************************ 00:06:41.690 21:14:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:41.690 21:14:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.690 21:14:05 -- accel/accel.sh@17 -- # local accel_module 00:06:41.690 21:14:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:41.690 21:14:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:41.690 21:14:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.690 21:14:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.690 21:14:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.690 21:14:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.690 21:14:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.690 21:14:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.690 21:14:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.690 21:14:05 -- accel/accel.sh@42 -- # jq -r . 00:06:41.690 [2024-11-28 21:14:05.079717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
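The copy_crc32c workload exercised above pairs a plain buffer copy with a CRC-32C (Castagnoli) checksum over the transferred data, using the seed of 0 shown in the configuration. Below is a minimal plain-C sketch of what one such operation computes; it illustrates the semantics only, is not SPDK's accel implementation, and the function names are made up for the example.

    #include <stdint.h>
    #include <string.h>
    #include <stddef.h>

    /* Bitwise (table-less) reflected CRC-32C, polynomial 0x1EDC6F41
     * (0x82F63B78 reversed). Slow but easy to read; follows the usual
     * convention of inverting the seed and the final result. */
    static uint32_t crc32c_sw(uint32_t seed, const uint8_t *buf, size_t len)
    {
        uint32_t crc = ~seed;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
        }
        return ~crc;
    }

    /* One copy_crc32c operation: copy src into dst and return the CRC-32C
     * of the copied bytes. */
    static uint32_t copy_crc32c_sw(void *dst, const void *src, size_t len,
                                   uint32_t seed)
    {
        memcpy(dst, src, len);
        return crc32c_sw(seed, src, len);
    }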
00:06:41.690 [2024-11-28 21:14:05.079806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68170 ] 00:06:41.690 [2024-11-28 21:14:05.216307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.690 [2024-11-28 21:14:05.245365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.628 21:14:06 -- accel/accel.sh@18 -- # out=' 00:06:42.628 SPDK Configuration: 00:06:42.628 Core mask: 0x1 00:06:42.628 00:06:42.628 Accel Perf Configuration: 00:06:42.628 Workload Type: copy_crc32c 00:06:42.628 CRC-32C seed: 0 00:06:42.628 Vector size: 4096 bytes 00:06:42.628 Transfer size: 8192 bytes 00:06:42.628 Vector count 2 00:06:42.628 Module: software 00:06:42.628 Queue depth: 32 00:06:42.628 Allocate depth: 32 00:06:42.628 # threads/core: 1 00:06:42.628 Run time: 1 seconds 00:06:42.628 Verify: Yes 00:06:42.628 00:06:42.628 Running for 1 seconds... 00:06:42.628 00:06:42.628 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.628 ------------------------------------------------------------------------------------ 00:06:42.628 0,0 206240/s 1611 MiB/s 0 0 00:06:42.628 ==================================================================================== 00:06:42.628 Total 206240/s 805 MiB/s 0 0' 00:06:42.628 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.628 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.628 21:14:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:42.628 21:14:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:42.628 21:14:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.628 21:14:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.628 21:14:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.628 21:14:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.628 21:14:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.628 21:14:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.628 21:14:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.628 21:14:06 -- accel/accel.sh@42 -- # jq -r . 00:06:42.887 [2024-11-28 21:14:06.388709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
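The -C 2 run above differs from the previous copy_crc32c run only in its vector count: each operation gathers two 4096-byte source vectors, which is why the configuration reports an 8192-byte transfer size. The bandwidth column follows directly from the transfer rate and transfer size; a small check in C with values copied from the per-core row of the table (illustrative arithmetic only):

    #include <stdio.h>

    int main(void)
    {
        double transfers_per_sec = 206240.0; /* from the 0,0 row above */
        double transfer_bytes    = 8192.0;   /* 2 vectors x 4096 bytes */
        /* 206240 * 8192 / 2^20 ~= 1611 MiB/s, matching the 0,0 row. */
        printf("%.0f MiB/s\n",
               transfers_per_sec * transfer_bytes / (1024.0 * 1024.0));
        return 0;
    }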
00:06:42.887 [2024-11-28 21:14:06.388838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68184 ] 00:06:42.887 [2024-11-28 21:14:06.530084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.887 [2024-11-28 21:14:06.568114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.887 21:14:06 -- accel/accel.sh@21 -- # val= 00:06:42.887 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.887 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.887 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.887 21:14:06 -- accel/accel.sh@21 -- # val= 00:06:42.887 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.887 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.887 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.887 21:14:06 -- accel/accel.sh@21 -- # val=0x1 00:06:42.887 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.887 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.887 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.887 21:14:06 -- accel/accel.sh@21 -- # val= 00:06:42.887 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.887 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.887 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.887 21:14:06 -- accel/accel.sh@21 -- # val= 00:06:42.887 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.887 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.887 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.887 21:14:06 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:42.887 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.887 21:14:06 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:42.887 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.887 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.887 21:14:06 -- accel/accel.sh@21 -- # val=0 00:06:42.887 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.887 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.888 21:14:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.888 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.888 21:14:06 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:42.888 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.888 21:14:06 -- accel/accel.sh@21 -- # val= 00:06:42.888 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.888 21:14:06 -- accel/accel.sh@21 -- # val=software 00:06:42.888 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.888 21:14:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.888 21:14:06 -- accel/accel.sh@21 -- # val=32 00:06:42.888 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.888 21:14:06 -- accel/accel.sh@21 -- # val=32 
00:06:42.888 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.888 21:14:06 -- accel/accel.sh@21 -- # val=1 00:06:42.888 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.888 21:14:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.888 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.888 21:14:06 -- accel/accel.sh@21 -- # val=Yes 00:06:42.888 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.888 21:14:06 -- accel/accel.sh@21 -- # val= 00:06:42.888 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.888 21:14:06 -- accel/accel.sh@21 -- # val= 00:06:42.888 21:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.888 21:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:44.264 21:14:07 -- accel/accel.sh@21 -- # val= 00:06:44.264 21:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.264 21:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:44.264 21:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:44.264 21:14:07 -- accel/accel.sh@21 -- # val= 00:06:44.264 21:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.264 21:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:44.264 21:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:44.264 21:14:07 -- accel/accel.sh@21 -- # val= 00:06:44.264 21:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.264 21:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:44.264 21:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:44.264 21:14:07 -- accel/accel.sh@21 -- # val= 00:06:44.264 21:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.265 21:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:44.265 21:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:44.265 21:14:07 -- accel/accel.sh@21 -- # val= 00:06:44.265 21:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.265 21:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:44.265 21:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:44.265 21:14:07 -- accel/accel.sh@21 -- # val= 00:06:44.265 21:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.265 21:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:44.265 21:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:44.265 21:14:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.265 21:14:07 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:44.265 21:14:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.265 00:06:44.265 real 0m2.638s 00:06:44.265 user 0m2.292s 00:06:44.265 sys 0m0.149s 00:06:44.265 21:14:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.265 ************************************ 00:06:44.265 END TEST accel_copy_crc32c_C2 00:06:44.265 ************************************ 00:06:44.265 21:14:07 -- common/autotest_common.sh@10 -- # set +x 00:06:44.265 21:14:07 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:44.265 21:14:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:44.265 21:14:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.265 21:14:07 -- common/autotest_common.sh@10 -- # set +x 00:06:44.265 ************************************ 00:06:44.265 START TEST accel_dualcast 00:06:44.265 ************************************ 00:06:44.265 21:14:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:44.265 21:14:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.265 21:14:07 -- accel/accel.sh@17 -- # local accel_module 00:06:44.265 21:14:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:44.265 21:14:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:44.265 21:14:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.265 21:14:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.265 21:14:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.265 21:14:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.265 21:14:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.265 21:14:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.265 21:14:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.265 21:14:07 -- accel/accel.sh@42 -- # jq -r . 00:06:44.265 [2024-11-28 21:14:07.762887] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:44.265 [2024-11-28 21:14:07.762974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68215 ] 00:06:44.265 [2024-11-28 21:14:07.898625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.265 [2024-11-28 21:14:07.928259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.641 21:14:09 -- accel/accel.sh@18 -- # out=' 00:06:45.641 SPDK Configuration: 00:06:45.641 Core mask: 0x1 00:06:45.641 00:06:45.641 Accel Perf Configuration: 00:06:45.641 Workload Type: dualcast 00:06:45.641 Transfer size: 4096 bytes 00:06:45.641 Vector count 1 00:06:45.641 Module: software 00:06:45.641 Queue depth: 32 00:06:45.641 Allocate depth: 32 00:06:45.641 # threads/core: 1 00:06:45.641 Run time: 1 seconds 00:06:45.641 Verify: Yes 00:06:45.641 00:06:45.641 Running for 1 seconds... 00:06:45.641 00:06:45.641 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.641 ------------------------------------------------------------------------------------ 00:06:45.641 0,0 394624/s 1541 MiB/s 0 0 00:06:45.641 ==================================================================================== 00:06:45.641 Total 394624/s 1541 MiB/s 0 0' 00:06:45.641 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.641 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.641 21:14:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:45.641 21:14:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:45.641 21:14:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.641 21:14:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.641 21:14:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.641 21:14:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.641 21:14:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.641 21:14:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.641 21:14:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.641 21:14:09 -- accel/accel.sh@42 -- # jq -r . 
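The dualcast workload measured above writes a single 4096-byte source buffer to two destination buffers in one operation; with verification enabled, both destinations are then checked against the source. A plain-C sketch of those semantics (not the SPDK implementation; names are illustrative):

    #include <string.h>
    #include <stddef.h>

    /* One dualcast operation: replicate src into two destinations. */
    static void dualcast_sw(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }

    /* With "Verify: Yes", a mismatch on either destination would be
     * counted in the Failed/Miscompares columns of the result table. */
    static int dualcast_verify_sw(const void *dst1, const void *dst2,
                                  const void *src, size_t len)
    {
        return (memcmp(dst1, src, len) == 0 &&
                memcmp(dst2, src, len) == 0) ? 0 : -1;
    }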
00:06:45.641 [2024-11-28 21:14:09.071163] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:45.641 [2024-11-28 21:14:09.071252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68233 ] 00:06:45.641 [2024-11-28 21:14:09.207261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.641 [2024-11-28 21:14:09.238148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.641 21:14:09 -- accel/accel.sh@21 -- # val= 00:06:45.641 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.641 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.641 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.641 21:14:09 -- accel/accel.sh@21 -- # val= 00:06:45.641 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.641 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.641 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.641 21:14:09 -- accel/accel.sh@21 -- # val=0x1 00:06:45.641 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.641 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.641 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.641 21:14:09 -- accel/accel.sh@21 -- # val= 00:06:45.641 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.641 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.641 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.641 21:14:09 -- accel/accel.sh@21 -- # val= 00:06:45.641 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.641 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.641 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.641 21:14:09 -- accel/accel.sh@21 -- # val=dualcast 00:06:45.641 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.641 21:14:09 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:45.641 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.641 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.642 21:14:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.642 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.642 21:14:09 -- accel/accel.sh@21 -- # val= 00:06:45.642 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.642 21:14:09 -- accel/accel.sh@21 -- # val=software 00:06:45.642 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.642 21:14:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.642 21:14:09 -- accel/accel.sh@21 -- # val=32 00:06:45.642 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.642 21:14:09 -- accel/accel.sh@21 -- # val=32 00:06:45.642 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.642 21:14:09 -- accel/accel.sh@21 -- # val=1 00:06:45.642 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.642 
21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.642 21:14:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.642 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.642 21:14:09 -- accel/accel.sh@21 -- # val=Yes 00:06:45.642 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.642 21:14:09 -- accel/accel.sh@21 -- # val= 00:06:45.642 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.642 21:14:09 -- accel/accel.sh@21 -- # val= 00:06:45.642 21:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.642 21:14:09 -- accel/accel.sh@20 -- # read -r var val 00:06:47.016 21:14:10 -- accel/accel.sh@21 -- # val= 00:06:47.016 21:14:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.016 21:14:10 -- accel/accel.sh@20 -- # IFS=: 00:06:47.016 21:14:10 -- accel/accel.sh@20 -- # read -r var val 00:06:47.016 21:14:10 -- accel/accel.sh@21 -- # val= 00:06:47.016 21:14:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.016 21:14:10 -- accel/accel.sh@20 -- # IFS=: 00:06:47.016 21:14:10 -- accel/accel.sh@20 -- # read -r var val 00:06:47.016 21:14:10 -- accel/accel.sh@21 -- # val= 00:06:47.016 21:14:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.016 21:14:10 -- accel/accel.sh@20 -- # IFS=: 00:06:47.016 21:14:10 -- accel/accel.sh@20 -- # read -r var val 00:06:47.016 21:14:10 -- accel/accel.sh@21 -- # val= 00:06:47.016 ************************************ 00:06:47.016 END TEST accel_dualcast 00:06:47.016 ************************************ 00:06:47.016 21:14:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.016 21:14:10 -- accel/accel.sh@20 -- # IFS=: 00:06:47.016 21:14:10 -- accel/accel.sh@20 -- # read -r var val 00:06:47.016 21:14:10 -- accel/accel.sh@21 -- # val= 00:06:47.016 21:14:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.016 21:14:10 -- accel/accel.sh@20 -- # IFS=: 00:06:47.016 21:14:10 -- accel/accel.sh@20 -- # read -r var val 00:06:47.016 21:14:10 -- accel/accel.sh@21 -- # val= 00:06:47.016 21:14:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.016 21:14:10 -- accel/accel.sh@20 -- # IFS=: 00:06:47.016 21:14:10 -- accel/accel.sh@20 -- # read -r var val 00:06:47.016 21:14:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.016 21:14:10 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:47.016 21:14:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.016 00:06:47.016 real 0m2.615s 00:06:47.016 user 0m2.277s 00:06:47.016 sys 0m0.138s 00:06:47.016 21:14:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.016 21:14:10 -- common/autotest_common.sh@10 -- # set +x 00:06:47.016 21:14:10 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:47.016 21:14:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:47.016 21:14:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.016 21:14:10 -- common/autotest_common.sh@10 -- # set +x 00:06:47.016 ************************************ 00:06:47.016 START TEST accel_compare 00:06:47.016 ************************************ 00:06:47.016 21:14:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:47.016 
21:14:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.016 21:14:10 -- accel/accel.sh@17 -- # local accel_module 00:06:47.016 21:14:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:47.016 21:14:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:47.016 21:14:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.016 21:14:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.016 21:14:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.016 21:14:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.016 21:14:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.016 21:14:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.016 21:14:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.016 21:14:10 -- accel/accel.sh@42 -- # jq -r . 00:06:47.016 [2024-11-28 21:14:10.438774] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:47.016 [2024-11-28 21:14:10.438995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68269 ] 00:06:47.016 [2024-11-28 21:14:10.575244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.016 [2024-11-28 21:14:10.604326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.394 21:14:11 -- accel/accel.sh@18 -- # out=' 00:06:48.394 SPDK Configuration: 00:06:48.394 Core mask: 0x1 00:06:48.394 00:06:48.394 Accel Perf Configuration: 00:06:48.394 Workload Type: compare 00:06:48.394 Transfer size: 4096 bytes 00:06:48.394 Vector count 1 00:06:48.394 Module: software 00:06:48.394 Queue depth: 32 00:06:48.394 Allocate depth: 32 00:06:48.394 # threads/core: 1 00:06:48.394 Run time: 1 seconds 00:06:48.394 Verify: Yes 00:06:48.394 00:06:48.394 Running for 1 seconds... 00:06:48.394 00:06:48.394 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.394 ------------------------------------------------------------------------------------ 00:06:48.394 0,0 505120/s 1973 MiB/s 0 0 00:06:48.394 ==================================================================================== 00:06:48.394 Total 505120/s 1973 MiB/s 0 0' 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:48.394 21:14:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.394 21:14:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.394 21:14:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.394 21:14:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.394 21:14:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.394 21:14:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.394 21:14:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.394 21:14:11 -- accel/accel.sh@42 -- # jq -r . 00:06:48.394 [2024-11-28 21:14:11.746851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
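The compare workload above checks two equal-length buffers for equality; its higher transfer rate relative to the copy-based workloads likely reflects that nothing is written back. A minimal sketch of the semantics (illustrative only, not SPDK's code):

    #include <string.h>
    #include <stddef.h>

    /* One compare operation: returns 0 when the buffers match; a mismatch
     * would show up under Failed/Miscompares in the table above. */
    static int compare_sw(const void *a, const void *b, size_t len)
    {
        return memcmp(a, b, len) == 0 ? 0 : -1;
    }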
00:06:48.394 [2024-11-28 21:14:11.746952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68283 ] 00:06:48.394 [2024-11-28 21:14:11.881893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.394 [2024-11-28 21:14:11.910793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val= 00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val= 00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val=0x1 00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val= 00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val= 00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val=compare 00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val= 00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val=software 00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val=32 00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val=32 00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val=1 00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val=Yes 00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val= 00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.394 21:14:11 -- accel/accel.sh@21 -- # val= 00:06:48.394 21:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.394 21:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:49.330 21:14:13 -- accel/accel.sh@21 -- # val= 00:06:49.330 21:14:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.330 21:14:13 -- accel/accel.sh@20 -- # IFS=: 00:06:49.330 21:14:13 -- accel/accel.sh@20 -- # read -r var val 00:06:49.330 21:14:13 -- accel/accel.sh@21 -- # val= 00:06:49.330 21:14:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.330 21:14:13 -- accel/accel.sh@20 -- # IFS=: 00:06:49.330 21:14:13 -- accel/accel.sh@20 -- # read -r var val 00:06:49.330 21:14:13 -- accel/accel.sh@21 -- # val= 00:06:49.330 21:14:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.330 21:14:13 -- accel/accel.sh@20 -- # IFS=: 00:06:49.330 21:14:13 -- accel/accel.sh@20 -- # read -r var val 00:06:49.330 21:14:13 -- accel/accel.sh@21 -- # val= 00:06:49.330 21:14:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.330 21:14:13 -- accel/accel.sh@20 -- # IFS=: 00:06:49.330 21:14:13 -- accel/accel.sh@20 -- # read -r var val 00:06:49.330 21:14:13 -- accel/accel.sh@21 -- # val= 00:06:49.330 21:14:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.330 21:14:13 -- accel/accel.sh@20 -- # IFS=: 00:06:49.330 21:14:13 -- accel/accel.sh@20 -- # read -r var val 00:06:49.330 21:14:13 -- accel/accel.sh@21 -- # val= 00:06:49.330 21:14:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.330 21:14:13 -- accel/accel.sh@20 -- # IFS=: 00:06:49.330 21:14:13 -- accel/accel.sh@20 -- # read -r var val 00:06:49.330 21:14:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.330 21:14:13 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:49.330 21:14:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.330 00:06:49.330 real 0m2.616s 00:06:49.330 user 0m2.275s 00:06:49.330 sys 0m0.141s 00:06:49.330 21:14:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.330 21:14:13 -- common/autotest_common.sh@10 -- # set +x 00:06:49.330 ************************************ 00:06:49.330 END TEST accel_compare 00:06:49.330 ************************************ 00:06:49.331 21:14:13 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:49.331 21:14:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:49.331 21:14:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.331 21:14:13 -- common/autotest_common.sh@10 -- # set +x 00:06:49.590 ************************************ 00:06:49.590 START TEST accel_xor 00:06:49.590 ************************************ 00:06:49.590 21:14:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:49.590 21:14:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.590 21:14:13 -- accel/accel.sh@17 -- # local accel_module 00:06:49.590 
21:14:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:49.590 21:14:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:49.590 21:14:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.590 21:14:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.590 21:14:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.590 21:14:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.590 21:14:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.590 21:14:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.590 21:14:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.590 21:14:13 -- accel/accel.sh@42 -- # jq -r . 00:06:49.590 [2024-11-28 21:14:13.108250] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:49.590 [2024-11-28 21:14:13.108380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68318 ] 00:06:49.590 [2024-11-28 21:14:13.252470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.590 [2024-11-28 21:14:13.281594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.967 21:14:14 -- accel/accel.sh@18 -- # out=' 00:06:50.967 SPDK Configuration: 00:06:50.967 Core mask: 0x1 00:06:50.967 00:06:50.967 Accel Perf Configuration: 00:06:50.967 Workload Type: xor 00:06:50.967 Source buffers: 2 00:06:50.967 Transfer size: 4096 bytes 00:06:50.967 Vector count 1 00:06:50.967 Module: software 00:06:50.967 Queue depth: 32 00:06:50.967 Allocate depth: 32 00:06:50.967 # threads/core: 1 00:06:50.967 Run time: 1 seconds 00:06:50.967 Verify: Yes 00:06:50.967 00:06:50.967 Running for 1 seconds... 00:06:50.967 00:06:50.967 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.967 ------------------------------------------------------------------------------------ 00:06:50.967 0,0 272096/s 1062 MiB/s 0 0 00:06:50.967 ==================================================================================== 00:06:50.967 Total 272096/s 1062 MiB/s 0 0' 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:50.967 21:14:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.967 21:14:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.967 21:14:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.967 21:14:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.967 21:14:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.967 21:14:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.967 21:14:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.967 21:14:14 -- accel/accel.sh@42 -- # jq -r . 00:06:50.967 [2024-11-28 21:14:14.416448] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
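The xor workload above produces a destination buffer that is the bytewise XOR of its source buffers; this run uses two 4096-byte sources, and the -x 3 run that follows uses three. A generic plain-C sketch of the operation (not SPDK's code; names are illustrative):

    #include <stdint.h>
    #include <stddef.h>

    /* XOR an arbitrary number of equal-length source buffers into dst. */
    static void xor_sw(uint8_t *dst, const uint8_t *const *srcs,
                       size_t nsrcs, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t v = srcs[0][i];
            for (size_t s = 1; s < nsrcs; s++)
                v ^= srcs[s][i];
            dst[i] = v;
        }
    }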
00:06:50.967 [2024-11-28 21:14:14.416538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68336 ] 00:06:50.967 [2024-11-28 21:14:14.553672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.967 [2024-11-28 21:14:14.587163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val= 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val= 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val=0x1 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val= 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val= 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val=xor 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val=2 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val= 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val=software 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val=32 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val=32 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val=1 00:06:50.967 21:14:14 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val=Yes 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val= 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.967 21:14:14 -- accel/accel.sh@21 -- # val= 00:06:50.967 21:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.967 21:14:14 -- accel/accel.sh@20 -- # read -r var val 00:06:52.365 21:14:15 -- accel/accel.sh@21 -- # val= 00:06:52.365 21:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.365 21:14:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.365 21:14:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.365 21:14:15 -- accel/accel.sh@21 -- # val= 00:06:52.365 21:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.365 21:14:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.365 21:14:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.365 21:14:15 -- accel/accel.sh@21 -- # val= 00:06:52.365 21:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.365 21:14:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.365 21:14:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.365 21:14:15 -- accel/accel.sh@21 -- # val= 00:06:52.365 21:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.365 21:14:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.365 21:14:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.365 21:14:15 -- accel/accel.sh@21 -- # val= 00:06:52.365 21:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.365 21:14:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.365 21:14:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.365 21:14:15 -- accel/accel.sh@21 -- # val= 00:06:52.365 21:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.365 21:14:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.365 21:14:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.365 21:14:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.365 21:14:15 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:52.365 21:14:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.365 00:06:52.365 real 0m2.631s 00:06:52.365 user 0m2.276s 00:06:52.365 sys 0m0.153s 00:06:52.365 21:14:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.365 21:14:15 -- common/autotest_common.sh@10 -- # set +x 00:06:52.365 ************************************ 00:06:52.365 END TEST accel_xor 00:06:52.365 ************************************ 00:06:52.365 21:14:15 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:52.365 21:14:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:52.366 21:14:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.366 21:14:15 -- common/autotest_common.sh@10 -- # set +x 00:06:52.366 ************************************ 00:06:52.366 START TEST accel_xor 00:06:52.366 ************************************ 00:06:52.366 
21:14:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:52.366 21:14:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.366 21:14:15 -- accel/accel.sh@17 -- # local accel_module 00:06:52.366 21:14:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:52.366 21:14:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:52.366 21:14:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.366 21:14:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.366 21:14:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.366 21:14:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.366 21:14:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.366 21:14:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.366 21:14:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.366 21:14:15 -- accel/accel.sh@42 -- # jq -r . 00:06:52.366 [2024-11-28 21:14:15.784113] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:52.366 [2024-11-28 21:14:15.784200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68367 ] 00:06:52.366 [2024-11-28 21:14:15.921055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.366 [2024-11-28 21:14:15.952248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.744 21:14:17 -- accel/accel.sh@18 -- # out=' 00:06:53.744 SPDK Configuration: 00:06:53.744 Core mask: 0x1 00:06:53.744 00:06:53.744 Accel Perf Configuration: 00:06:53.744 Workload Type: xor 00:06:53.744 Source buffers: 3 00:06:53.744 Transfer size: 4096 bytes 00:06:53.744 Vector count 1 00:06:53.744 Module: software 00:06:53.744 Queue depth: 32 00:06:53.744 Allocate depth: 32 00:06:53.744 # threads/core: 1 00:06:53.744 Run time: 1 seconds 00:06:53.744 Verify: Yes 00:06:53.744 00:06:53.744 Running for 1 seconds... 00:06:53.744 00:06:53.744 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.744 ------------------------------------------------------------------------------------ 00:06:53.744 0,0 265024/s 1035 MiB/s 0 0 00:06:53.744 ==================================================================================== 00:06:53.744 Total 265024/s 1035 MiB/s 0 0' 00:06:53.744 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.744 21:14:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:53.744 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.744 21:14:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:53.744 21:14:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.744 21:14:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.744 21:14:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.744 21:14:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.744 21:14:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.744 21:14:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.744 21:14:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.744 21:14:17 -- accel/accel.sh@42 -- # jq -r . 00:06:53.744 [2024-11-28 21:14:17.094276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
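The -x 3 configuration above only raises the source-buffer count from two to three, so each operation does more work, which is consistent with the slightly lower transfer rate than the two-source run. Because the run has "Verify: Yes", each completed operation is also checked against a host-computed reference; a hedged sketch of that check for three sources (illustrative names, not accel_perf internals):

    #include <stdint.h>
    #include <stddef.h>

    /* Recompute the expected XOR of the three sources and compare it with
     * what the engine produced; a mismatch would count as a miscompare. */
    static int verify_xor3_sw(const uint8_t *dst, const uint8_t *s0,
                              const uint8_t *s1, const uint8_t *s2, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            if (dst[i] != (uint8_t)(s0[i] ^ s1[i] ^ s2[i]))
                return -1;
        }
        return 0;
    }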
00:06:53.745 [2024-11-28 21:14:17.094367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68387 ] 00:06:53.745 [2024-11-28 21:14:17.226701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.745 [2024-11-28 21:14:17.258511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val= 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val= 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val=0x1 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val= 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val= 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val=xor 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val=3 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val= 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val=software 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val=32 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val=32 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val=1 00:06:53.745 21:14:17 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val=Yes 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val= 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:53.745 21:14:17 -- accel/accel.sh@21 -- # val= 00:06:53.745 21:14:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # IFS=: 00:06:53.745 21:14:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.721 21:14:18 -- accel/accel.sh@21 -- # val= 00:06:54.721 21:14:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.721 21:14:18 -- accel/accel.sh@20 -- # IFS=: 00:06:54.721 21:14:18 -- accel/accel.sh@20 -- # read -r var val 00:06:54.721 21:14:18 -- accel/accel.sh@21 -- # val= 00:06:54.721 21:14:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.721 21:14:18 -- accel/accel.sh@20 -- # IFS=: 00:06:54.721 21:14:18 -- accel/accel.sh@20 -- # read -r var val 00:06:54.721 21:14:18 -- accel/accel.sh@21 -- # val= 00:06:54.721 21:14:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.721 21:14:18 -- accel/accel.sh@20 -- # IFS=: 00:06:54.721 21:14:18 -- accel/accel.sh@20 -- # read -r var val 00:06:54.721 21:14:18 -- accel/accel.sh@21 -- # val= 00:06:54.721 ************************************ 00:06:54.721 END TEST accel_xor 00:06:54.721 ************************************ 00:06:54.721 21:14:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.721 21:14:18 -- accel/accel.sh@20 -- # IFS=: 00:06:54.721 21:14:18 -- accel/accel.sh@20 -- # read -r var val 00:06:54.721 21:14:18 -- accel/accel.sh@21 -- # val= 00:06:54.721 21:14:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.721 21:14:18 -- accel/accel.sh@20 -- # IFS=: 00:06:54.721 21:14:18 -- accel/accel.sh@20 -- # read -r var val 00:06:54.721 21:14:18 -- accel/accel.sh@21 -- # val= 00:06:54.721 21:14:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.721 21:14:18 -- accel/accel.sh@20 -- # IFS=: 00:06:54.721 21:14:18 -- accel/accel.sh@20 -- # read -r var val 00:06:54.721 21:14:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.721 21:14:18 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:54.721 21:14:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.721 00:06:54.721 real 0m2.628s 00:06:54.721 user 0m2.291s 00:06:54.721 sys 0m0.141s 00:06:54.721 21:14:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.721 21:14:18 -- common/autotest_common.sh@10 -- # set +x 00:06:54.721 21:14:18 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:54.721 21:14:18 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:54.721 21:14:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.721 21:14:18 -- common/autotest_common.sh@10 -- # set +x 00:06:54.721 ************************************ 00:06:54.721 START TEST accel_dif_verify 00:06:54.721 ************************************ 
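The accel_dif_verify case below measures DIF verification on 4096-byte transfers with a 512-byte block size and 8 bytes of metadata, as reported in its SPDK Configuration output. A comparable standalone sketch, again using only the flags recorded in this log (config descriptor and environment assumptions as noted for the xor case):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify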
00:06:54.721 21:14:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:54.721 21:14:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.721 21:14:18 -- accel/accel.sh@17 -- # local accel_module 00:06:54.721 21:14:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:54.721 21:14:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:54.721 21:14:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.721 21:14:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.721 21:14:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.721 21:14:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.721 21:14:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.721 21:14:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.721 21:14:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.721 21:14:18 -- accel/accel.sh@42 -- # jq -r . 00:06:54.980 [2024-11-28 21:14:18.471494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:54.980 [2024-11-28 21:14:18.471660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68416 ] 00:06:54.980 [2024-11-28 21:14:18.608508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.980 [2024-11-28 21:14:18.640955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.359 21:14:19 -- accel/accel.sh@18 -- # out=' 00:06:56.359 SPDK Configuration: 00:06:56.359 Core mask: 0x1 00:06:56.359 00:06:56.359 Accel Perf Configuration: 00:06:56.359 Workload Type: dif_verify 00:06:56.359 Vector size: 4096 bytes 00:06:56.359 Transfer size: 4096 bytes 00:06:56.359 Block size: 512 bytes 00:06:56.359 Metadata size: 8 bytes 00:06:56.359 Vector count 1 00:06:56.359 Module: software 00:06:56.359 Queue depth: 32 00:06:56.359 Allocate depth: 32 00:06:56.359 # threads/core: 1 00:06:56.359 Run time: 1 seconds 00:06:56.359 Verify: No 00:06:56.359 00:06:56.359 Running for 1 seconds... 00:06:56.359 00:06:56.359 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.359 ------------------------------------------------------------------------------------ 00:06:56.359 0,0 112288/s 445 MiB/s 0 0 00:06:56.359 ==================================================================================== 00:06:56.359 Total 112288/s 438 MiB/s 0 0' 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:56.359 21:14:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.359 21:14:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.359 21:14:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.359 21:14:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.359 21:14:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.359 21:14:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.359 21:14:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.359 21:14:19 -- accel/accel.sh@42 -- # jq -r . 00:06:56.359 [2024-11-28 21:14:19.779132] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:56.359 [2024-11-28 21:14:19.779224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68435 ] 00:06:56.359 [2024-11-28 21:14:19.907975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.359 [2024-11-28 21:14:19.938827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val= 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val= 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val=0x1 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val= 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val= 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val=dif_verify 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val= 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val=software 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 
-- # val=32 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val=32 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val=1 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val=No 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val= 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.359 21:14:19 -- accel/accel.sh@21 -- # val= 00:06:56.359 21:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.359 21:14:19 -- accel/accel.sh@20 -- # read -r var val 00:06:57.739 21:14:21 -- accel/accel.sh@21 -- # val= 00:06:57.739 21:14:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.739 21:14:21 -- accel/accel.sh@20 -- # IFS=: 00:06:57.739 21:14:21 -- accel/accel.sh@20 -- # read -r var val 00:06:57.739 21:14:21 -- accel/accel.sh@21 -- # val= 00:06:57.739 21:14:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.739 21:14:21 -- accel/accel.sh@20 -- # IFS=: 00:06:57.739 21:14:21 -- accel/accel.sh@20 -- # read -r var val 00:06:57.739 21:14:21 -- accel/accel.sh@21 -- # val= 00:06:57.739 21:14:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.739 21:14:21 -- accel/accel.sh@20 -- # IFS=: 00:06:57.739 21:14:21 -- accel/accel.sh@20 -- # read -r var val 00:06:57.739 21:14:21 -- accel/accel.sh@21 -- # val= 00:06:57.739 21:14:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.739 21:14:21 -- accel/accel.sh@20 -- # IFS=: 00:06:57.739 21:14:21 -- accel/accel.sh@20 -- # read -r var val 00:06:57.739 21:14:21 -- accel/accel.sh@21 -- # val= 00:06:57.739 21:14:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.739 21:14:21 -- accel/accel.sh@20 -- # IFS=: 00:06:57.739 21:14:21 -- accel/accel.sh@20 -- # read -r var val 00:06:57.739 21:14:21 -- accel/accel.sh@21 -- # val= 00:06:57.739 21:14:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.739 21:14:21 -- accel/accel.sh@20 -- # IFS=: 00:06:57.739 21:14:21 -- accel/accel.sh@20 -- # read -r var val 00:06:57.739 21:14:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.739 21:14:21 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:57.739 21:14:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.739 00:06:57.739 real 0m2.619s 00:06:57.739 user 0m2.281s 00:06:57.739 sys 0m0.141s 00:06:57.739 21:14:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.739 ************************************ 00:06:57.740 END TEST accel_dif_verify 00:06:57.740 ************************************ 00:06:57.740 
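The accel_dif_generate test that follows uses the same transfer geometry (4096-byte transfers, 512-byte blocks, 8-byte metadata) but generates DIF rather than verifying it, so its configuration reports Verify: No. A minimal sketch of the equivalent direct invocation taken from this log, with the same caveats as above:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate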
21:14:21 -- common/autotest_common.sh@10 -- # set +x 00:06:57.740 21:14:21 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:57.740 21:14:21 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:57.740 21:14:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.740 21:14:21 -- common/autotest_common.sh@10 -- # set +x 00:06:57.740 ************************************ 00:06:57.740 START TEST accel_dif_generate 00:06:57.740 ************************************ 00:06:57.740 21:14:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:57.740 21:14:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.740 21:14:21 -- accel/accel.sh@17 -- # local accel_module 00:06:57.740 21:14:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:57.740 21:14:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:57.740 21:14:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.740 21:14:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.740 21:14:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.740 21:14:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.740 21:14:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.740 21:14:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.740 21:14:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.740 21:14:21 -- accel/accel.sh@42 -- # jq -r . 00:06:57.740 [2024-11-28 21:14:21.134824] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:57.740 [2024-11-28 21:14:21.134925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68470 ] 00:06:57.740 [2024-11-28 21:14:21.271848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.740 [2024-11-28 21:14:21.301330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.117 21:14:22 -- accel/accel.sh@18 -- # out=' 00:06:59.117 SPDK Configuration: 00:06:59.117 Core mask: 0x1 00:06:59.117 00:06:59.117 Accel Perf Configuration: 00:06:59.117 Workload Type: dif_generate 00:06:59.117 Vector size: 4096 bytes 00:06:59.117 Transfer size: 4096 bytes 00:06:59.117 Block size: 512 bytes 00:06:59.117 Metadata size: 8 bytes 00:06:59.117 Vector count 1 00:06:59.117 Module: software 00:06:59.117 Queue depth: 32 00:06:59.117 Allocate depth: 32 00:06:59.117 # threads/core: 1 00:06:59.117 Run time: 1 seconds 00:06:59.117 Verify: No 00:06:59.117 00:06:59.117 Running for 1 seconds... 
00:06:59.117 00:06:59.117 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.117 ------------------------------------------------------------------------------------ 00:06:59.117 0,0 142816/s 566 MiB/s 0 0 00:06:59.117 ==================================================================================== 00:06:59.117 Total 142816/s 557 MiB/s 0 0' 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:59.117 21:14:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.117 21:14:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.117 21:14:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.117 21:14:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.117 21:14:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.117 21:14:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.117 21:14:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.117 21:14:22 -- accel/accel.sh@42 -- # jq -r . 00:06:59.117 [2024-11-28 21:14:22.443136] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:59.117 [2024-11-28 21:14:22.443242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68484 ] 00:06:59.117 [2024-11-28 21:14:22.577914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.117 [2024-11-28 21:14:22.607156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val= 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val= 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val=0x1 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val= 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val= 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val=dif_generate 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 
00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val= 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val=software 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val=32 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val=32 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val=1 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val=No 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val= 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.117 21:14:22 -- accel/accel.sh@21 -- # val= 00:06:59.117 21:14:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.117 21:14:22 -- accel/accel.sh@20 -- # read -r var val 00:07:00.055 21:14:23 -- accel/accel.sh@21 -- # val= 00:07:00.055 21:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.055 21:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.055 21:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.055 21:14:23 -- accel/accel.sh@21 -- # val= 00:07:00.055 21:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.055 21:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.055 21:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.055 21:14:23 -- accel/accel.sh@21 -- # val= 00:07:00.055 21:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.055 21:14:23 -- 
accel/accel.sh@20 -- # IFS=: 00:07:00.055 21:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.055 21:14:23 -- accel/accel.sh@21 -- # val= 00:07:00.055 21:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.055 21:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.055 21:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.055 21:14:23 -- accel/accel.sh@21 -- # val= 00:07:00.055 21:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.055 21:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.055 21:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.055 21:14:23 -- accel/accel.sh@21 -- # val= 00:07:00.055 21:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.055 21:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.055 21:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.055 21:14:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.055 21:14:23 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:00.055 21:14:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.055 00:07:00.055 real 0m2.613s 00:07:00.055 user 0m2.287s 00:07:00.055 sys 0m0.131s 00:07:00.055 21:14:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.055 21:14:23 -- common/autotest_common.sh@10 -- # set +x 00:07:00.055 ************************************ 00:07:00.055 END TEST accel_dif_generate 00:07:00.055 ************************************ 00:07:00.055 21:14:23 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:00.055 21:14:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:00.055 21:14:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.055 21:14:23 -- common/autotest_common.sh@10 -- # set +x 00:07:00.055 ************************************ 00:07:00.055 START TEST accel_dif_generate_copy 00:07:00.055 ************************************ 00:07:00.055 21:14:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:00.055 21:14:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.055 21:14:23 -- accel/accel.sh@17 -- # local accel_module 00:07:00.055 21:14:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:00.055 21:14:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:00.055 21:14:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.055 21:14:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.055 21:14:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.055 21:14:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.055 21:14:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.055 21:14:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.055 21:14:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.055 21:14:23 -- accel/accel.sh@42 -- # jq -r . 00:07:00.055 [2024-11-28 21:14:23.795204] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:00.055 [2024-11-28 21:14:23.795310] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68513 ] 00:07:00.314 [2024-11-28 21:14:23.927776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.314 [2024-11-28 21:14:23.957257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.692 21:14:25 -- accel/accel.sh@18 -- # out=' 00:07:01.692 SPDK Configuration: 00:07:01.692 Core mask: 0x1 00:07:01.692 00:07:01.692 Accel Perf Configuration: 00:07:01.692 Workload Type: dif_generate_copy 00:07:01.692 Vector size: 4096 bytes 00:07:01.692 Transfer size: 4096 bytes 00:07:01.692 Vector count 1 00:07:01.692 Module: software 00:07:01.692 Queue depth: 32 00:07:01.692 Allocate depth: 32 00:07:01.692 # threads/core: 1 00:07:01.692 Run time: 1 seconds 00:07:01.692 Verify: No 00:07:01.692 00:07:01.692 Running for 1 seconds... 00:07:01.692 00:07:01.692 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.692 ------------------------------------------------------------------------------------ 00:07:01.692 0,0 106528/s 422 MiB/s 0 0 00:07:01.692 ==================================================================================== 00:07:01.692 Total 106528/s 416 MiB/s 0 0' 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:01.692 21:14:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.692 21:14:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.692 21:14:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.692 21:14:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.692 21:14:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.692 21:14:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.692 21:14:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.692 21:14:25 -- accel/accel.sh@42 -- # jq -r . 00:07:01.692 [2024-11-28 21:14:25.103614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:01.692 [2024-11-28 21:14:25.103717] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68532 ] 00:07:01.692 [2024-11-28 21:14:25.239260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.692 [2024-11-28 21:14:25.268665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val= 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val= 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val=0x1 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val= 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val= 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val= 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val=software 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val=32 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val=32 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 
-- # val=1 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val=No 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val= 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:01.692 21:14:25 -- accel/accel.sh@21 -- # val= 00:07:01.692 21:14:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # IFS=: 00:07:01.692 21:14:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.100 21:14:26 -- accel/accel.sh@21 -- # val= 00:07:03.100 21:14:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.100 21:14:26 -- accel/accel.sh@20 -- # IFS=: 00:07:03.100 21:14:26 -- accel/accel.sh@20 -- # read -r var val 00:07:03.100 21:14:26 -- accel/accel.sh@21 -- # val= 00:07:03.100 21:14:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.100 21:14:26 -- accel/accel.sh@20 -- # IFS=: 00:07:03.100 21:14:26 -- accel/accel.sh@20 -- # read -r var val 00:07:03.100 21:14:26 -- accel/accel.sh@21 -- # val= 00:07:03.100 21:14:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.100 21:14:26 -- accel/accel.sh@20 -- # IFS=: 00:07:03.100 21:14:26 -- accel/accel.sh@20 -- # read -r var val 00:07:03.100 21:14:26 -- accel/accel.sh@21 -- # val= 00:07:03.100 21:14:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.100 21:14:26 -- accel/accel.sh@20 -- # IFS=: 00:07:03.100 21:14:26 -- accel/accel.sh@20 -- # read -r var val 00:07:03.100 21:14:26 -- accel/accel.sh@21 -- # val= 00:07:03.101 21:14:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.101 21:14:26 -- accel/accel.sh@20 -- # IFS=: 00:07:03.101 21:14:26 -- accel/accel.sh@20 -- # read -r var val 00:07:03.101 21:14:26 -- accel/accel.sh@21 -- # val= 00:07:03.101 21:14:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.101 21:14:26 -- accel/accel.sh@20 -- # IFS=: 00:07:03.101 21:14:26 -- accel/accel.sh@20 -- # read -r var val 00:07:03.101 21:14:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.101 21:14:26 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:03.101 21:14:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.101 00:07:03.101 real 0m2.614s 00:07:03.101 user 0m2.272s 00:07:03.101 sys 0m0.144s 00:07:03.101 21:14:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.101 21:14:26 -- common/autotest_common.sh@10 -- # set +x 00:07:03.101 ************************************ 00:07:03.101 END TEST accel_dif_generate_copy 00:07:03.101 ************************************ 00:07:03.101 21:14:26 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:03.101 21:14:26 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.101 21:14:26 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:03.101 21:14:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.101 21:14:26 -- 
common/autotest_common.sh@10 -- # set +x 00:07:03.101 ************************************ 00:07:03.101 START TEST accel_comp 00:07:03.101 ************************************ 00:07:03.101 21:14:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.101 21:14:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.101 21:14:26 -- accel/accel.sh@17 -- # local accel_module 00:07:03.101 21:14:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.101 21:14:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.101 21:14:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.101 21:14:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.101 21:14:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.101 21:14:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.101 21:14:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.101 21:14:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.101 21:14:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.101 21:14:26 -- accel/accel.sh@42 -- # jq -r . 00:07:03.101 [2024-11-28 21:14:26.458590] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:03.101 [2024-11-28 21:14:26.458679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68567 ] 00:07:03.101 [2024-11-28 21:14:26.590713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.101 [2024-11-28 21:14:26.620751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.036 21:14:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:04.036 00:07:04.036 SPDK Configuration: 00:07:04.036 Core mask: 0x1 00:07:04.036 00:07:04.036 Accel Perf Configuration: 00:07:04.036 Workload Type: compress 00:07:04.036 Transfer size: 4096 bytes 00:07:04.036 Vector count 1 00:07:04.036 Module: software 00:07:04.036 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.036 Queue depth: 32 00:07:04.036 Allocate depth: 32 00:07:04.036 # threads/core: 1 00:07:04.036 Run time: 1 seconds 00:07:04.036 Verify: No 00:07:04.036 00:07:04.036 Running for 1 seconds... 
00:07:04.036 00:07:04.036 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.036 ------------------------------------------------------------------------------------ 00:07:04.036 0,0 54592/s 227 MiB/s 0 0 00:07:04.036 ==================================================================================== 00:07:04.036 Total 54592/s 213 MiB/s 0 0' 00:07:04.036 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.036 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.036 21:14:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.036 21:14:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.036 21:14:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.036 21:14:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.036 21:14:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.036 21:14:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.036 21:14:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.036 21:14:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.036 21:14:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.036 21:14:27 -- accel/accel.sh@42 -- # jq -r . 00:07:04.036 [2024-11-28 21:14:27.760218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:04.036 [2024-11-28 21:14:27.760318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68581 ] 00:07:04.294 [2024-11-28 21:14:27.895894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.294 [2024-11-28 21:14:27.925161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.294 21:14:27 -- accel/accel.sh@21 -- # val= 00:07:04.294 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.294 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.294 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.294 21:14:27 -- accel/accel.sh@21 -- # val= 00:07:04.294 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.294 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.294 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.294 21:14:27 -- accel/accel.sh@21 -- # val= 00:07:04.294 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.294 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.294 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.294 21:14:27 -- accel/accel.sh@21 -- # val=0x1 00:07:04.294 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.294 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.294 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.294 21:14:27 -- accel/accel.sh@21 -- # val= 00:07:04.294 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.294 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.294 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.294 21:14:27 -- accel/accel.sh@21 -- # val= 00:07:04.294 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.294 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.294 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.294 21:14:27 -- accel/accel.sh@21 -- # val=compress 00:07:04.294 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.294 21:14:27 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:04.294 21:14:27 -- accel/accel.sh@20 -- # IFS=: 
00:07:04.294 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.294 21:14:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.294 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.294 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.295 21:14:27 -- accel/accel.sh@21 -- # val= 00:07:04.295 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.295 21:14:27 -- accel/accel.sh@21 -- # val=software 00:07:04.295 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.295 21:14:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.295 21:14:27 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.295 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.295 21:14:27 -- accel/accel.sh@21 -- # val=32 00:07:04.295 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.295 21:14:27 -- accel/accel.sh@21 -- # val=32 00:07:04.295 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.295 21:14:27 -- accel/accel.sh@21 -- # val=1 00:07:04.295 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.295 21:14:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.295 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.295 21:14:27 -- accel/accel.sh@21 -- # val=No 00:07:04.295 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.295 21:14:27 -- accel/accel.sh@21 -- # val= 00:07:04.295 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:04.295 21:14:27 -- accel/accel.sh@21 -- # val= 00:07:04.295 21:14:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # IFS=: 00:07:04.295 21:14:27 -- accel/accel.sh@20 -- # read -r var val 00:07:05.670 21:14:29 -- accel/accel.sh@21 -- # val= 00:07:05.670 21:14:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.670 21:14:29 -- accel/accel.sh@20 -- # IFS=: 00:07:05.670 21:14:29 -- accel/accel.sh@20 -- # read -r var val 00:07:05.670 21:14:29 -- accel/accel.sh@21 -- # val= 00:07:05.670 21:14:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.670 21:14:29 -- accel/accel.sh@20 -- # IFS=: 00:07:05.670 21:14:29 -- accel/accel.sh@20 -- # read -r var val 00:07:05.670 21:14:29 -- accel/accel.sh@21 -- # val= 00:07:05.670 21:14:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.670 21:14:29 -- accel/accel.sh@20 -- # IFS=: 00:07:05.670 21:14:29 -- accel/accel.sh@20 -- # read -r var val 00:07:05.670 21:14:29 -- accel/accel.sh@21 -- # val= 
00:07:05.670 21:14:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.670 21:14:29 -- accel/accel.sh@20 -- # IFS=: 00:07:05.670 21:14:29 -- accel/accel.sh@20 -- # read -r var val 00:07:05.670 21:14:29 -- accel/accel.sh@21 -- # val= 00:07:05.670 21:14:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.670 21:14:29 -- accel/accel.sh@20 -- # IFS=: 00:07:05.670 21:14:29 -- accel/accel.sh@20 -- # read -r var val 00:07:05.670 21:14:29 -- accel/accel.sh@21 -- # val= 00:07:05.670 21:14:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.670 21:14:29 -- accel/accel.sh@20 -- # IFS=: 00:07:05.670 21:14:29 -- accel/accel.sh@20 -- # read -r var val 00:07:05.670 21:14:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.670 21:14:29 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:05.670 21:14:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.670 00:07:05.670 real 0m2.608s 00:07:05.670 user 0m2.278s 00:07:05.670 sys 0m0.134s 00:07:05.670 21:14:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.670 ************************************ 00:07:05.670 END TEST accel_comp 00:07:05.670 ************************************ 00:07:05.670 21:14:29 -- common/autotest_common.sh@10 -- # set +x 00:07:05.670 21:14:29 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:05.670 21:14:29 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:05.670 21:14:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.670 21:14:29 -- common/autotest_common.sh@10 -- # set +x 00:07:05.670 ************************************ 00:07:05.670 START TEST accel_decomp 00:07:05.670 ************************************ 00:07:05.670 21:14:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:05.670 21:14:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.670 21:14:29 -- accel/accel.sh@17 -- # local accel_module 00:07:05.670 21:14:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:05.670 21:14:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:05.670 21:14:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.670 21:14:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.670 21:14:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.670 21:14:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.670 21:14:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.670 21:14:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.670 21:14:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.670 21:14:29 -- accel/accel.sh@42 -- # jq -r . 00:07:05.670 [2024-11-28 21:14:29.112730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:05.670 [2024-11-28 21:14:29.112824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68615 ] 00:07:05.670 [2024-11-28 21:14:29.249681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.670 [2024-11-28 21:14:29.280604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.048 21:14:30 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:07.048 00:07:07.048 SPDK Configuration: 00:07:07.048 Core mask: 0x1 00:07:07.048 00:07:07.048 Accel Perf Configuration: 00:07:07.048 Workload Type: decompress 00:07:07.048 Transfer size: 4096 bytes 00:07:07.049 Vector count 1 00:07:07.049 Module: software 00:07:07.049 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.049 Queue depth: 32 00:07:07.049 Allocate depth: 32 00:07:07.049 # threads/core: 1 00:07:07.049 Run time: 1 seconds 00:07:07.049 Verify: Yes 00:07:07.049 00:07:07.049 Running for 1 seconds... 00:07:07.049 00:07:07.049 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.049 ------------------------------------------------------------------------------------ 00:07:07.049 0,0 79072/s 145 MiB/s 0 0 00:07:07.049 ==================================================================================== 00:07:07.049 Total 79072/s 308 MiB/s 0 0' 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:07.049 21:14:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.049 21:14:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.049 21:14:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.049 21:14:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.049 21:14:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.049 21:14:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.049 21:14:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.049 21:14:30 -- accel/accel.sh@42 -- # jq -r . 00:07:07.049 [2024-11-28 21:14:30.416951] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:07.049 [2024-11-28 21:14:30.417069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68635 ] 00:07:07.049 [2024-11-28 21:14:30.552558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.049 [2024-11-28 21:14:30.585310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val= 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val= 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val= 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val=0x1 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val= 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val= 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val=decompress 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val= 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val=software 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val=32 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- 
accel/accel.sh@21 -- # val=32 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val=1 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val=Yes 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val= 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:07.049 21:14:30 -- accel/accel.sh@21 -- # val= 00:07:07.049 21:14:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # IFS=: 00:07:07.049 21:14:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.002 21:14:31 -- accel/accel.sh@21 -- # val= 00:07:08.002 21:14:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.002 21:14:31 -- accel/accel.sh@20 -- # IFS=: 00:07:08.002 21:14:31 -- accel/accel.sh@20 -- # read -r var val 00:07:08.002 21:14:31 -- accel/accel.sh@21 -- # val= 00:07:08.002 21:14:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.002 21:14:31 -- accel/accel.sh@20 -- # IFS=: 00:07:08.002 21:14:31 -- accel/accel.sh@20 -- # read -r var val 00:07:08.002 21:14:31 -- accel/accel.sh@21 -- # val= 00:07:08.002 21:14:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.002 21:14:31 -- accel/accel.sh@20 -- # IFS=: 00:07:08.002 21:14:31 -- accel/accel.sh@20 -- # read -r var val 00:07:08.002 21:14:31 -- accel/accel.sh@21 -- # val= 00:07:08.002 21:14:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.002 21:14:31 -- accel/accel.sh@20 -- # IFS=: 00:07:08.002 21:14:31 -- accel/accel.sh@20 -- # read -r var val 00:07:08.002 21:14:31 -- accel/accel.sh@21 -- # val= 00:07:08.002 21:14:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.002 21:14:31 -- accel/accel.sh@20 -- # IFS=: 00:07:08.002 21:14:31 -- accel/accel.sh@20 -- # read -r var val 00:07:08.002 21:14:31 -- accel/accel.sh@21 -- # val= 00:07:08.002 21:14:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.002 21:14:31 -- accel/accel.sh@20 -- # IFS=: 00:07:08.002 21:14:31 -- accel/accel.sh@20 -- # read -r var val 00:07:08.002 21:14:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.002 21:14:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:08.002 21:14:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.002 00:07:08.002 real 0m2.612s 00:07:08.002 user 0m1.150s 00:07:08.002 sys 0m0.069s 00:07:08.002 21:14:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.002 21:14:31 -- common/autotest_common.sh@10 -- # set +x 00:07:08.002 ************************************ 00:07:08.002 END TEST accel_decomp 00:07:08.002 ************************************ 00:07:08.002 21:14:31 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:08.002 21:14:31 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:08.002 21:14:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.002 21:14:31 -- common/autotest_common.sh@10 -- # set +x 00:07:08.262 ************************************ 00:07:08.262 START TEST accel_decmop_full 00:07:08.262 ************************************ 00:07:08.262 21:14:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:08.262 21:14:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.262 21:14:31 -- accel/accel.sh@17 -- # local accel_module 00:07:08.262 21:14:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:08.262 21:14:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:08.262 21:14:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.262 21:14:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.262 21:14:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.262 21:14:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.262 21:14:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.262 21:14:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.262 21:14:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.262 21:14:31 -- accel/accel.sh@42 -- # jq -r . 00:07:08.262 [2024-11-28 21:14:31.776934] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:08.262 [2024-11-28 21:14:31.777038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68664 ] 00:07:08.262 [2024-11-28 21:14:31.905201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.262 [2024-11-28 21:14:31.935018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.641 21:14:33 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:09.641 00:07:09.641 SPDK Configuration: 00:07:09.641 Core mask: 0x1 00:07:09.641 00:07:09.641 Accel Perf Configuration: 00:07:09.641 Workload Type: decompress 00:07:09.641 Transfer size: 111250 bytes 00:07:09.641 Vector count 1 00:07:09.641 Module: software 00:07:09.641 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.641 Queue depth: 32 00:07:09.641 Allocate depth: 32 00:07:09.641 # threads/core: 1 00:07:09.641 Run time: 1 seconds 00:07:09.641 Verify: Yes 00:07:09.641 00:07:09.641 Running for 1 seconds... 
00:07:09.641 00:07:09.641 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.641 ------------------------------------------------------------------------------------ 00:07:09.641 0,0 5248/s 216 MiB/s 0 0 00:07:09.641 ==================================================================================== 00:07:09.641 Total 5248/s 556 MiB/s 0 0' 00:07:09.641 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.641 21:14:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:09.641 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.641 21:14:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:09.641 21:14:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.641 21:14:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.641 21:14:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.641 21:14:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.641 21:14:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.641 21:14:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.641 21:14:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.641 21:14:33 -- accel/accel.sh@42 -- # jq -r . 00:07:09.641 [2024-11-28 21:14:33.080644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:09.642 [2024-11-28 21:14:33.080746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68678 ] 00:07:09.642 [2024-11-28 21:14:33.214667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.642 [2024-11-28 21:14:33.244637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val= 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val= 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val= 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val=0x1 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val= 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val= 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val=decompress 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:09.642 21:14:33 -- accel/accel.sh@20 
-- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val= 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val=software 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val=32 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val=32 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val=1 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val=Yes 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val= 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:09.642 21:14:33 -- accel/accel.sh@21 -- # val= 00:07:09.642 21:14:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # IFS=: 00:07:09.642 21:14:33 -- accel/accel.sh@20 -- # read -r var val 00:07:11.024 21:14:34 -- accel/accel.sh@21 -- # val= 00:07:11.024 21:14:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.024 21:14:34 -- accel/accel.sh@20 -- # IFS=: 00:07:11.024 21:14:34 -- accel/accel.sh@20 -- # read -r var val 00:07:11.024 21:14:34 -- accel/accel.sh@21 -- # val= 00:07:11.024 21:14:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.024 21:14:34 -- accel/accel.sh@20 -- # IFS=: 00:07:11.024 21:14:34 -- accel/accel.sh@20 -- # read -r var val 00:07:11.024 21:14:34 -- accel/accel.sh@21 -- # val= 00:07:11.024 21:14:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.024 21:14:34 -- accel/accel.sh@20 -- # IFS=: 00:07:11.024 21:14:34 -- accel/accel.sh@20 -- # read -r var val 00:07:11.024 21:14:34 -- accel/accel.sh@21 -- # 
val= 00:07:11.024 21:14:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.024 21:14:34 -- accel/accel.sh@20 -- # IFS=: 00:07:11.024 21:14:34 -- accel/accel.sh@20 -- # read -r var val 00:07:11.024 21:14:34 -- accel/accel.sh@21 -- # val= 00:07:11.024 21:14:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.024 21:14:34 -- accel/accel.sh@20 -- # IFS=: 00:07:11.024 21:14:34 -- accel/accel.sh@20 -- # read -r var val 00:07:11.024 21:14:34 -- accel/accel.sh@21 -- # val= 00:07:11.024 21:14:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.024 21:14:34 -- accel/accel.sh@20 -- # IFS=: 00:07:11.024 21:14:34 -- accel/accel.sh@20 -- # read -r var val 00:07:11.024 21:14:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.024 21:14:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:11.024 21:14:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.024 00:07:11.024 real 0m2.632s 00:07:11.024 user 0m2.292s 00:07:11.024 sys 0m0.140s 00:07:11.024 21:14:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.024 ************************************ 00:07:11.024 END TEST accel_decmop_full 00:07:11.024 ************************************ 00:07:11.024 21:14:34 -- common/autotest_common.sh@10 -- # set +x 00:07:11.024 21:14:34 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:11.024 21:14:34 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:11.024 21:14:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.024 21:14:34 -- common/autotest_common.sh@10 -- # set +x 00:07:11.024 ************************************ 00:07:11.024 START TEST accel_decomp_mcore 00:07:11.024 ************************************ 00:07:11.024 21:14:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:11.024 21:14:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.024 21:14:34 -- accel/accel.sh@17 -- # local accel_module 00:07:11.024 21:14:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:11.024 21:14:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:11.024 21:14:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.024 21:14:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.024 21:14:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.024 21:14:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.024 21:14:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.024 21:14:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.024 21:14:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.024 21:14:34 -- accel/accel.sh@42 -- # jq -r . 00:07:11.024 [2024-11-28 21:14:34.456738] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:11.024 [2024-11-28 21:14:34.456848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68718 ] 00:07:11.024 [2024-11-28 21:14:34.591230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.024 [2024-11-28 21:14:34.622935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.025 [2024-11-28 21:14:34.623064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.025 [2024-11-28 21:14:34.623205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.025 [2024-11-28 21:14:34.623208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.403 21:14:35 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:12.403 00:07:12.403 SPDK Configuration: 00:07:12.403 Core mask: 0xf 00:07:12.403 00:07:12.403 Accel Perf Configuration: 00:07:12.403 Workload Type: decompress 00:07:12.403 Transfer size: 4096 bytes 00:07:12.403 Vector count 1 00:07:12.403 Module: software 00:07:12.403 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.403 Queue depth: 32 00:07:12.403 Allocate depth: 32 00:07:12.403 # threads/core: 1 00:07:12.403 Run time: 1 seconds 00:07:12.403 Verify: Yes 00:07:12.403 00:07:12.403 Running for 1 seconds... 00:07:12.403 00:07:12.403 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.403 ------------------------------------------------------------------------------------ 00:07:12.403 0,0 64416/s 118 MiB/s 0 0 00:07:12.403 3,0 62720/s 115 MiB/s 0 0 00:07:12.403 2,0 60384/s 111 MiB/s 0 0 00:07:12.403 1,0 61728/s 113 MiB/s 0 0 00:07:12.403 ==================================================================================== 00:07:12.403 Total 249248/s 973 MiB/s 0 0' 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 21:14:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 21:14:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:12.403 21:14:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.403 21:14:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.403 21:14:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.403 21:14:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.403 21:14:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.403 21:14:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.403 21:14:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.403 21:14:35 -- accel/accel.sh@42 -- # jq -r . 00:07:12.403 [2024-11-28 21:14:35.778895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:12.403 [2024-11-28 21:14:35.779026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68735 ] 00:07:12.403 [2024-11-28 21:14:35.915387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.403 [2024-11-28 21:14:35.946859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.403 [2024-11-28 21:14:35.947027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.403 [2024-11-28 21:14:35.947134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.403 [2024-11-28 21:14:35.947134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.403 21:14:35 -- accel/accel.sh@21 -- # val= 00:07:12.403 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 21:14:35 -- accel/accel.sh@21 -- # val= 00:07:12.403 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 21:14:35 -- accel/accel.sh@21 -- # val= 00:07:12.403 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 21:14:35 -- accel/accel.sh@21 -- # val=0xf 00:07:12.403 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 21:14:35 -- accel/accel.sh@21 -- # val= 00:07:12.403 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 21:14:35 -- accel/accel.sh@21 -- # val= 00:07:12.403 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 21:14:35 -- accel/accel.sh@21 -- # val=decompress 00:07:12.403 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 21:14:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 21:14:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.403 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 21:14:35 -- accel/accel.sh@21 -- # val= 00:07:12.403 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 21:14:35 -- accel/accel.sh@21 -- # val=software 00:07:12.403 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 21:14:35 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 21:14:35 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.403 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # IFS=: 
00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 21:14:35 -- accel/accel.sh@21 -- # val=32 00:07:12.403 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 21:14:35 -- accel/accel.sh@21 -- # val=32 00:07:12.403 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 21:14:35 -- accel/accel.sh@21 -- # val=1 00:07:12.403 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.404 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.404 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.404 21:14:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.404 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.404 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.404 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.404 21:14:35 -- accel/accel.sh@21 -- # val=Yes 00:07:12.404 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.404 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.404 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.404 21:14:35 -- accel/accel.sh@21 -- # val= 00:07:12.404 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.404 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.404 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.404 21:14:35 -- accel/accel.sh@21 -- # val= 00:07:12.404 21:14:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.404 21:14:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.404 21:14:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.341 21:14:37 -- accel/accel.sh@21 -- # val= 00:07:13.341 21:14:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.341 21:14:37 -- accel/accel.sh@21 -- # val= 00:07:13.341 21:14:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.341 21:14:37 -- accel/accel.sh@21 -- # val= 00:07:13.341 21:14:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.341 21:14:37 -- accel/accel.sh@21 -- # val= 00:07:13.341 21:14:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.341 21:14:37 -- accel/accel.sh@21 -- # val= 00:07:13.341 21:14:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.341 21:14:37 -- accel/accel.sh@21 -- # val= 00:07:13.341 21:14:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.341 21:14:37 -- accel/accel.sh@21 -- # val= 00:07:13.341 21:14:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.341 21:14:37 -- accel/accel.sh@21 -- # val= 00:07:13.341 21:14:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.341 21:14:37 -- 
accel/accel.sh@20 -- # read -r var val 00:07:13.341 21:14:37 -- accel/accel.sh@21 -- # val= 00:07:13.341 21:14:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.341 21:14:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.341 21:14:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.341 21:14:37 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:13.341 21:14:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.341 00:07:13.341 real 0m2.644s 00:07:13.341 user 0m8.686s 00:07:13.341 sys 0m0.193s 00:07:13.341 21:14:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.341 ************************************ 00:07:13.341 END TEST accel_decomp_mcore 00:07:13.341 ************************************ 00:07:13.341 21:14:37 -- common/autotest_common.sh@10 -- # set +x 00:07:13.601 21:14:37 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.601 21:14:37 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:13.601 21:14:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.601 21:14:37 -- common/autotest_common.sh@10 -- # set +x 00:07:13.601 ************************************ 00:07:13.601 START TEST accel_decomp_full_mcore 00:07:13.601 ************************************ 00:07:13.601 21:14:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.601 21:14:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.601 21:14:37 -- accel/accel.sh@17 -- # local accel_module 00:07:13.601 21:14:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.601 21:14:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.601 21:14:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.601 21:14:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.601 21:14:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.601 21:14:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.601 21:14:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.601 21:14:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.601 21:14:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.601 21:14:37 -- accel/accel.sh@42 -- # jq -r . 00:07:13.601 [2024-11-28 21:14:37.145486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:13.601 [2024-11-28 21:14:37.145596] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68767 ] 00:07:13.601 [2024-11-28 21:14:37.277574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.601 [2024-11-28 21:14:37.309441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.601 [2024-11-28 21:14:37.309548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.601 [2024-11-28 21:14:37.309683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.601 [2024-11-28 21:14:37.309688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.980 21:14:38 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:14.980 00:07:14.980 SPDK Configuration: 00:07:14.980 Core mask: 0xf 00:07:14.980 00:07:14.980 Accel Perf Configuration: 00:07:14.980 Workload Type: decompress 00:07:14.980 Transfer size: 111250 bytes 00:07:14.980 Vector count 1 00:07:14.980 Module: software 00:07:14.980 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.980 Queue depth: 32 00:07:14.980 Allocate depth: 32 00:07:14.980 # threads/core: 1 00:07:14.980 Run time: 1 seconds 00:07:14.980 Verify: Yes 00:07:14.980 00:07:14.980 Running for 1 seconds... 00:07:14.980 00:07:14.980 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.980 ------------------------------------------------------------------------------------ 00:07:14.980 0,0 4832/s 199 MiB/s 0 0 00:07:14.980 3,0 4832/s 199 MiB/s 0 0 00:07:14.980 2,0 4800/s 198 MiB/s 0 0 00:07:14.980 1,0 4800/s 198 MiB/s 0 0 00:07:14.980 ==================================================================================== 00:07:14.980 Total 19264/s 2043 MiB/s 0 0' 00:07:14.980 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.980 21:14:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:14.980 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.980 21:14:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:14.980 21:14:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.980 21:14:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.980 21:14:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.980 21:14:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.980 21:14:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.980 21:14:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.980 21:14:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.980 21:14:38 -- accel/accel.sh@42 -- # jq -r . 00:07:14.980 [2024-11-28 21:14:38.476105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:14.980 [2024-11-28 21:14:38.476202] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68790 ] 00:07:14.980 [2024-11-28 21:14:38.609451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.980 [2024-11-28 21:14:38.642582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.981 [2024-11-28 21:14:38.642717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.981 [2024-11-28 21:14:38.642849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.981 [2024-11-28 21:14:38.642853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val= 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val= 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val= 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val=0xf 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val= 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val= 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val=decompress 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val= 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val=software 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 
00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val=32 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val=32 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val=1 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val=Yes 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val= 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.981 21:14:38 -- accel/accel.sh@21 -- # val= 00:07:14.981 21:14:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.981 21:14:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.358 21:14:39 -- accel/accel.sh@21 -- # val= 00:07:16.358 21:14:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.358 21:14:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.358 21:14:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.358 21:14:39 -- accel/accel.sh@21 -- # val= 00:07:16.358 21:14:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.358 21:14:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.358 21:14:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.358 21:14:39 -- accel/accel.sh@21 -- # val= 00:07:16.358 21:14:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.358 21:14:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.358 21:14:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.358 21:14:39 -- accel/accel.sh@21 -- # val= 00:07:16.358 21:14:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.358 21:14:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.358 21:14:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.358 21:14:39 -- accel/accel.sh@21 -- # val= 00:07:16.358 21:14:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.358 21:14:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.358 21:14:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.358 21:14:39 -- accel/accel.sh@21 -- # val= 00:07:16.358 21:14:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.358 21:14:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.358 21:14:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.358 21:14:39 -- accel/accel.sh@21 -- # val= 00:07:16.358 21:14:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.358 21:14:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.358 21:14:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.359 21:14:39 -- accel/accel.sh@21 -- # val= 00:07:16.359 21:14:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.359 21:14:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.359 21:14:39 -- 
accel/accel.sh@20 -- # read -r var val 00:07:16.359 21:14:39 -- accel/accel.sh@21 -- # val= 00:07:16.359 21:14:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.359 21:14:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.359 21:14:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.359 21:14:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.359 21:14:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:16.359 21:14:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.359 00:07:16.359 real 0m2.670s 00:07:16.359 user 0m8.795s 00:07:16.359 sys 0m0.182s 00:07:16.359 21:14:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.359 21:14:39 -- common/autotest_common.sh@10 -- # set +x 00:07:16.359 ************************************ 00:07:16.359 END TEST accel_decomp_full_mcore 00:07:16.359 ************************************ 00:07:16.359 21:14:39 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:16.359 21:14:39 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:16.359 21:14:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.359 21:14:39 -- common/autotest_common.sh@10 -- # set +x 00:07:16.359 ************************************ 00:07:16.359 START TEST accel_decomp_mthread 00:07:16.359 ************************************ 00:07:16.359 21:14:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:16.359 21:14:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.359 21:14:39 -- accel/accel.sh@17 -- # local accel_module 00:07:16.359 21:14:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:16.359 21:14:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:16.359 21:14:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.359 21:14:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.359 21:14:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.359 21:14:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.359 21:14:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.359 21:14:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.359 21:14:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.359 21:14:39 -- accel/accel.sh@42 -- # jq -r . 00:07:16.359 [2024-11-28 21:14:39.857716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:16.359 [2024-11-28 21:14:39.857847] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68827 ] 00:07:16.359 [2024-11-28 21:14:39.988804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.359 [2024-11-28 21:14:40.019605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.735 21:14:41 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:17.735 00:07:17.735 SPDK Configuration: 00:07:17.735 Core mask: 0x1 00:07:17.735 00:07:17.735 Accel Perf Configuration: 00:07:17.735 Workload Type: decompress 00:07:17.735 Transfer size: 4096 bytes 00:07:17.735 Vector count 1 00:07:17.735 Module: software 00:07:17.735 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:17.735 Queue depth: 32 00:07:17.735 Allocate depth: 32 00:07:17.735 # threads/core: 2 00:07:17.735 Run time: 1 seconds 00:07:17.735 Verify: Yes 00:07:17.735 00:07:17.735 Running for 1 seconds... 00:07:17.735 00:07:17.735 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.735 ------------------------------------------------------------------------------------ 00:07:17.735 0,1 40480/s 74 MiB/s 0 0 00:07:17.735 0,0 40384/s 74 MiB/s 0 0 00:07:17.735 ==================================================================================== 00:07:17.735 Total 80864/s 315 MiB/s 0 0' 00:07:17.735 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.735 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.735 21:14:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:17.735 21:14:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:17.735 21:14:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.735 21:14:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.735 21:14:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.735 21:14:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.735 21:14:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.735 21:14:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.735 21:14:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.735 21:14:41 -- accel/accel.sh@42 -- # jq -r . 00:07:17.735 [2024-11-28 21:14:41.166146] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:17.735 [2024-11-28 21:14:41.166243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68841 ] 00:07:17.735 [2024-11-28 21:14:41.293608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.735 [2024-11-28 21:14:41.323507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.735 21:14:41 -- accel/accel.sh@21 -- # val= 00:07:17.735 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.735 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.735 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.735 21:14:41 -- accel/accel.sh@21 -- # val= 00:07:17.735 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.735 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.735 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.735 21:14:41 -- accel/accel.sh@21 -- # val= 00:07:17.735 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.736 21:14:41 -- accel/accel.sh@21 -- # val=0x1 00:07:17.736 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.736 21:14:41 -- accel/accel.sh@21 -- # val= 00:07:17.736 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.736 21:14:41 -- accel/accel.sh@21 -- # val= 00:07:17.736 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.736 21:14:41 -- accel/accel.sh@21 -- # val=decompress 00:07:17.736 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.736 21:14:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.736 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.736 21:14:41 -- accel/accel.sh@21 -- # val= 00:07:17.736 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.736 21:14:41 -- accel/accel.sh@21 -- # val=software 00:07:17.736 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.736 21:14:41 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:17.736 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.736 21:14:41 -- accel/accel.sh@21 -- # val=32 00:07:17.736 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.736 21:14:41 -- 
accel/accel.sh@21 -- # val=32 00:07:17.736 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.736 21:14:41 -- accel/accel.sh@21 -- # val=2 00:07:17.736 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.736 21:14:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.736 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.736 21:14:41 -- accel/accel.sh@21 -- # val=Yes 00:07:17.736 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.736 21:14:41 -- accel/accel.sh@21 -- # val= 00:07:17.736 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:17.736 21:14:41 -- accel/accel.sh@21 -- # val= 00:07:17.736 21:14:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # IFS=: 00:07:17.736 21:14:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.128 21:14:42 -- accel/accel.sh@21 -- # val= 00:07:19.128 21:14:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.128 21:14:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.128 21:14:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.128 21:14:42 -- accel/accel.sh@21 -- # val= 00:07:19.128 21:14:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.128 21:14:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.128 21:14:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.128 21:14:42 -- accel/accel.sh@21 -- # val= 00:07:19.128 21:14:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.128 21:14:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.128 21:14:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.128 21:14:42 -- accel/accel.sh@21 -- # val= 00:07:19.128 21:14:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.128 21:14:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.128 21:14:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.128 21:14:42 -- accel/accel.sh@21 -- # val= 00:07:19.128 21:14:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.128 21:14:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.128 21:14:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.128 21:14:42 -- accel/accel.sh@21 -- # val= 00:07:19.128 21:14:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.128 21:14:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.128 21:14:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.128 21:14:42 -- accel/accel.sh@21 -- # val= 00:07:19.128 21:14:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.128 21:14:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.128 21:14:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.128 21:14:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.128 21:14:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:19.128 21:14:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.128 00:07:19.128 real 0m2.611s 00:07:19.128 user 0m2.275s 00:07:19.128 sys 0m0.135s 00:07:19.128 21:14:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.128 21:14:42 -- common/autotest_common.sh@10 -- # set +x 00:07:19.128 ************************************ 00:07:19.128 END 
TEST accel_decomp_mthread 00:07:19.128 ************************************ 00:07:19.128 21:14:42 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.128 21:14:42 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:19.128 21:14:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.128 21:14:42 -- common/autotest_common.sh@10 -- # set +x 00:07:19.128 ************************************ 00:07:19.128 START TEST accel_deomp_full_mthread 00:07:19.128 ************************************ 00:07:19.129 21:14:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.129 21:14:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.129 21:14:42 -- accel/accel.sh@17 -- # local accel_module 00:07:19.129 21:14:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.129 21:14:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.129 21:14:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.129 21:14:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.129 21:14:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.129 21:14:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.129 21:14:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.129 21:14:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.129 21:14:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.129 21:14:42 -- accel/accel.sh@42 -- # jq -r . 00:07:19.129 [2024-11-28 21:14:42.512601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:19.129 [2024-11-28 21:14:42.512696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68876 ] 00:07:19.129 [2024-11-28 21:14:42.641082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.129 [2024-11-28 21:14:42.670870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.101 21:14:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:20.101 00:07:20.101 SPDK Configuration: 00:07:20.101 Core mask: 0x1 00:07:20.101 00:07:20.101 Accel Perf Configuration: 00:07:20.101 Workload Type: decompress 00:07:20.101 Transfer size: 111250 bytes 00:07:20.101 Vector count 1 00:07:20.101 Module: software 00:07:20.101 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:20.101 Queue depth: 32 00:07:20.101 Allocate depth: 32 00:07:20.101 # threads/core: 2 00:07:20.101 Run time: 1 seconds 00:07:20.101 Verify: Yes 00:07:20.101 00:07:20.101 Running for 1 seconds... 
00:07:20.101 00:07:20.101 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.101 ------------------------------------------------------------------------------------ 00:07:20.101 0,1 2688/s 111 MiB/s 0 0 00:07:20.101 0,0 2688/s 111 MiB/s 0 0 00:07:20.101 ==================================================================================== 00:07:20.101 Total 5376/s 570 MiB/s 0 0' 00:07:20.101 21:14:43 -- accel/accel.sh@20 -- # IFS=: 00:07:20.101 21:14:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.101 21:14:43 -- accel/accel.sh@20 -- # read -r var val 00:07:20.101 21:14:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.101 21:14:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.101 21:14:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.101 21:14:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.101 21:14:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.101 21:14:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.101 21:14:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.101 21:14:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.101 21:14:43 -- accel/accel.sh@42 -- # jq -r . 00:07:20.101 [2024-11-28 21:14:43.840387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:20.101 [2024-11-28 21:14:43.840500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68895 ] 00:07:20.361 [2024-11-28 21:14:43.974666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.361 [2024-11-28 21:14:44.004321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val= 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val= 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val= 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val=0x1 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val= 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val= 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val=decompress 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val= 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val=software 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val=32 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val=32 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val=2 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val=Yes 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val= 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.361 21:14:44 -- accel/accel.sh@21 -- # val= 00:07:20.361 21:14:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.361 21:14:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.738 21:14:45 -- accel/accel.sh@21 -- # val= 00:07:21.738 21:14:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.738 21:14:45 -- accel/accel.sh@20 -- # IFS=: 00:07:21.738 21:14:45 -- accel/accel.sh@20 -- # read -r var val 00:07:21.738 21:14:45 -- accel/accel.sh@21 -- # val= 00:07:21.738 21:14:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.738 21:14:45 -- accel/accel.sh@20 -- # IFS=: 00:07:21.738 21:14:45 -- accel/accel.sh@20 -- # read -r var val 00:07:21.738 21:14:45 -- accel/accel.sh@21 -- # val= 00:07:21.738 21:14:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.738 21:14:45 -- accel/accel.sh@20 -- # IFS=: 00:07:21.738 21:14:45 -- accel/accel.sh@20 -- # 
read -r var val 00:07:21.738 21:14:45 -- accel/accel.sh@21 -- # val= 00:07:21.738 21:14:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.739 21:14:45 -- accel/accel.sh@20 -- # IFS=: 00:07:21.739 21:14:45 -- accel/accel.sh@20 -- # read -r var val 00:07:21.739 21:14:45 -- accel/accel.sh@21 -- # val= 00:07:21.739 21:14:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.739 21:14:45 -- accel/accel.sh@20 -- # IFS=: 00:07:21.739 21:14:45 -- accel/accel.sh@20 -- # read -r var val 00:07:21.739 21:14:45 -- accel/accel.sh@21 -- # val= 00:07:21.739 21:14:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.739 21:14:45 -- accel/accel.sh@20 -- # IFS=: 00:07:21.739 21:14:45 -- accel/accel.sh@20 -- # read -r var val 00:07:21.739 21:14:45 -- accel/accel.sh@21 -- # val= 00:07:21.739 21:14:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.739 21:14:45 -- accel/accel.sh@20 -- # IFS=: 00:07:21.739 21:14:45 -- accel/accel.sh@20 -- # read -r var val 00:07:21.739 21:14:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.739 21:14:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:21.739 21:14:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.739 00:07:21.739 real 0m2.653s 00:07:21.739 user 0m2.317s 00:07:21.739 sys 0m0.139s 00:07:21.739 21:14:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.739 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:07:21.739 ************************************ 00:07:21.739 END TEST accel_deomp_full_mthread 00:07:21.739 ************************************ 00:07:21.739 21:14:45 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:21.739 21:14:45 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:21.739 21:14:45 -- accel/accel.sh@129 -- # build_accel_config 00:07:21.739 21:14:45 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:21.739 21:14:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.739 21:14:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.739 21:14:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.739 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:07:21.739 21:14:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.739 21:14:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.739 21:14:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.739 21:14:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.739 21:14:45 -- accel/accel.sh@42 -- # jq -r . 00:07:21.739 ************************************ 00:07:21.739 START TEST accel_dif_functional_tests 00:07:21.739 ************************************ 00:07:21.739 21:14:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:21.739 [2024-11-28 21:14:45.236817] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
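For orientation: the decompress throughput table that closes just above was produced by a single accel_perf invocation, copied verbatim into the sketch below from the trace; the flag comments only restate what the "Accel Perf Configuration" block already reports. This is an editorial aside rather than part of the captured run, and the -c /dev/fd/62 argument only resolves because the harness binds fd 62 to a generated accel JSON config before launching.

    # -t 1 -> "Run time: 1 seconds"    -w decompress -> "Workload Type: decompress"
    # -l ...bib -> "File Name: .../test/accel/bib"    -y -> "Verify: Yes"    -T 2 -> "# threads/core: 2"
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2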
00:07:21.739 [2024-11-28 21:14:45.236914] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68925 ] 00:07:21.739 [2024-11-28 21:14:45.367180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.739 [2024-11-28 21:14:45.398384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.739 [2024-11-28 21:14:45.398519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.739 [2024-11-28 21:14:45.398536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.739 00:07:21.739 00:07:21.739 CUnit - A unit testing framework for C - Version 2.1-3 00:07:21.739 http://cunit.sourceforge.net/ 00:07:21.739 00:07:21.739 00:07:21.739 Suite: accel_dif 00:07:21.739 Test: verify: DIF generated, GUARD check ...passed 00:07:21.739 Test: verify: DIF generated, APPTAG check ...passed 00:07:21.739 Test: verify: DIF generated, REFTAG check ...passed 00:07:21.739 Test: verify: DIF not generated, GUARD check ...passed 00:07:21.739 Test: verify: DIF not generated, APPTAG check ...passed 00:07:21.739 Test: verify: DIF not generated, REFTAG check ...[2024-11-28 21:14:45.442849] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:21.739 [2024-11-28 21:14:45.442929] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:21.739 [2024-11-28 21:14:45.442994] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:21.739 [2024-11-28 21:14:45.443034] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:21.739 passed 00:07:21.739 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:21.739 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:21.739 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:21.739 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:21.739 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-11-28 21:14:45.443058] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:21.739 [2024-11-28 21:14:45.443082] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:21.739 [2024-11-28 21:14:45.443133] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:21.739 passed 00:07:21.739 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:21.739 Test: generate copy: DIF generated, GUARD check ...passed 00:07:21.739 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:21.739 Test: generate copy: DIF generated, REFTAG check ...[2024-11-28 21:14:45.443268] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:21.739 passed 00:07:21.739 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:21.739 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:21.739 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:21.739 Test: generate copy: iovecs-len validate ...[2024-11-28 21:14:45.443520] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:21.739 passed 00:07:21.739 Test: generate copy: buffer alignment validate ...passed 00:07:21.739 00:07:21.739 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.739 suites 1 1 n/a 0 0 00:07:21.739 tests 20 20 20 0 0 00:07:21.739 asserts 204 204 204 0 n/a 00:07:21.739 00:07:21.739 Elapsed time = 0.002 seconds 00:07:21.998 00:07:21.998 real 0m0.366s 00:07:21.998 user 0m0.424s 00:07:21.998 sys 0m0.087s 00:07:21.998 21:14:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.998 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:07:21.998 ************************************ 00:07:21.998 END TEST accel_dif_functional_tests 00:07:21.998 ************************************ 00:07:21.998 00:07:21.998 real 0m56.329s 00:07:21.998 user 1m1.715s 00:07:21.998 sys 0m4.147s 00:07:21.998 21:14:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.998 ************************************ 00:07:21.998 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:07:21.998 END TEST accel 00:07:21.998 ************************************ 00:07:21.998 21:14:45 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:21.999 21:14:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:21.999 21:14:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.999 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:07:21.999 ************************************ 00:07:21.999 START TEST accel_rpc 00:07:21.999 ************************************ 00:07:21.999 21:14:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:21.999 * Looking for test storage... 00:07:21.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:21.999 21:14:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:21.999 21:14:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:21.999 21:14:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:22.258 21:14:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:22.258 21:14:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:22.258 21:14:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:22.258 21:14:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:22.258 21:14:45 -- scripts/common.sh@335 -- # IFS=.-: 00:07:22.258 21:14:45 -- scripts/common.sh@335 -- # read -ra ver1 00:07:22.258 21:14:45 -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.258 21:14:45 -- scripts/common.sh@336 -- # read -ra ver2 00:07:22.258 21:14:45 -- scripts/common.sh@337 -- # local 'op=<' 00:07:22.258 21:14:45 -- scripts/common.sh@339 -- # ver1_l=2 00:07:22.258 21:14:45 -- scripts/common.sh@340 -- # ver2_l=1 00:07:22.258 21:14:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:22.258 21:14:45 -- scripts/common.sh@343 -- # case "$op" in 00:07:22.258 21:14:45 -- scripts/common.sh@344 -- # : 1 00:07:22.258 21:14:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:22.258 21:14:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.258 21:14:45 -- scripts/common.sh@364 -- # decimal 1 00:07:22.258 21:14:45 -- scripts/common.sh@352 -- # local d=1 00:07:22.258 21:14:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.258 21:14:45 -- scripts/common.sh@354 -- # echo 1 00:07:22.258 21:14:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:22.258 21:14:45 -- scripts/common.sh@365 -- # decimal 2 00:07:22.258 21:14:45 -- scripts/common.sh@352 -- # local d=2 00:07:22.258 21:14:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.258 21:14:45 -- scripts/common.sh@354 -- # echo 2 00:07:22.258 21:14:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:22.258 21:14:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:22.258 21:14:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:22.258 21:14:45 -- scripts/common.sh@367 -- # return 0 00:07:22.258 21:14:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.258 21:14:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:22.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.258 --rc genhtml_branch_coverage=1 00:07:22.258 --rc genhtml_function_coverage=1 00:07:22.258 --rc genhtml_legend=1 00:07:22.258 --rc geninfo_all_blocks=1 00:07:22.258 --rc geninfo_unexecuted_blocks=1 00:07:22.258 00:07:22.258 ' 00:07:22.258 21:14:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:22.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.258 --rc genhtml_branch_coverage=1 00:07:22.258 --rc genhtml_function_coverage=1 00:07:22.258 --rc genhtml_legend=1 00:07:22.258 --rc geninfo_all_blocks=1 00:07:22.258 --rc geninfo_unexecuted_blocks=1 00:07:22.258 00:07:22.258 ' 00:07:22.258 21:14:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:22.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.258 --rc genhtml_branch_coverage=1 00:07:22.258 --rc genhtml_function_coverage=1 00:07:22.258 --rc genhtml_legend=1 00:07:22.258 --rc geninfo_all_blocks=1 00:07:22.259 --rc geninfo_unexecuted_blocks=1 00:07:22.259 00:07:22.259 ' 00:07:22.259 21:14:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:22.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.259 --rc genhtml_branch_coverage=1 00:07:22.259 --rc genhtml_function_coverage=1 00:07:22.259 --rc genhtml_legend=1 00:07:22.259 --rc geninfo_all_blocks=1 00:07:22.259 --rc geninfo_unexecuted_blocks=1 00:07:22.259 00:07:22.259 ' 00:07:22.259 21:14:45 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:22.259 21:14:45 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=68997 00:07:22.259 21:14:45 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:22.259 21:14:45 -- accel/accel_rpc.sh@15 -- # waitforlisten 68997 00:07:22.259 21:14:45 -- common/autotest_common.sh@829 -- # '[' -z 68997 ']' 00:07:22.259 21:14:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.259 21:14:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.259 21:14:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
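The accel_rpc target above was launched with --wait-for-rpc, so subsystem initialization is deferred until the test drives it over /var/tmp/spdk.sock. Below is a condensed editorial sketch of the RPC sequence exercised in the trace that follows; the RPC shell variable is introduced here only for brevity, while every method name appears verbatim in the log.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
    $RPC framework_start_init                     # leave the --wait-for-rpc pause and init subsystems
    $RPC accel_get_opc_assignments | jq -r .copy  # expected to print "software"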
00:07:22.259 21:14:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.259 21:14:45 -- common/autotest_common.sh@10 -- # set +x 00:07:22.259 [2024-11-28 21:14:45.891814] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:22.259 [2024-11-28 21:14:45.891911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68997 ] 00:07:22.518 [2024-11-28 21:14:46.029115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.518 [2024-11-28 21:14:46.059821] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.518 [2024-11-28 21:14:46.060023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.518 21:14:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.518 21:14:46 -- common/autotest_common.sh@862 -- # return 0 00:07:22.518 21:14:46 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:22.518 21:14:46 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:22.518 21:14:46 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:22.518 21:14:46 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:22.518 21:14:46 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:22.518 21:14:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:22.518 21:14:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.518 21:14:46 -- common/autotest_common.sh@10 -- # set +x 00:07:22.518 ************************************ 00:07:22.518 START TEST accel_assign_opcode 00:07:22.518 ************************************ 00:07:22.518 21:14:46 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:22.518 21:14:46 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:22.518 21:14:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.518 21:14:46 -- common/autotest_common.sh@10 -- # set +x 00:07:22.518 [2024-11-28 21:14:46.148418] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:22.518 21:14:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.518 21:14:46 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:22.518 21:14:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.518 21:14:46 -- common/autotest_common.sh@10 -- # set +x 00:07:22.518 [2024-11-28 21:14:46.156458] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:22.518 21:14:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.518 21:14:46 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:22.518 21:14:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.518 21:14:46 -- common/autotest_common.sh@10 -- # set +x 00:07:22.778 21:14:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.778 21:14:46 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:22.778 21:14:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.778 21:14:46 -- common/autotest_common.sh@10 -- # set +x 00:07:22.778 21:14:46 -- accel/accel_rpc.sh@42 -- # grep software 00:07:22.778 21:14:46 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:22.778 21:14:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.778 software 00:07:22.778 00:07:22.778 
real 0m0.182s 00:07:22.778 user 0m0.049s 00:07:22.778 sys 0m0.014s 00:07:22.778 21:14:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.778 21:14:46 -- common/autotest_common.sh@10 -- # set +x 00:07:22.778 ************************************ 00:07:22.778 END TEST accel_assign_opcode 00:07:22.778 ************************************ 00:07:22.778 21:14:46 -- accel/accel_rpc.sh@55 -- # killprocess 68997 00:07:22.778 21:14:46 -- common/autotest_common.sh@936 -- # '[' -z 68997 ']' 00:07:22.778 21:14:46 -- common/autotest_common.sh@940 -- # kill -0 68997 00:07:22.778 21:14:46 -- common/autotest_common.sh@941 -- # uname 00:07:22.778 21:14:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:22.778 21:14:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68997 00:07:22.778 21:14:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:22.778 21:14:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:22.778 killing process with pid 68997 00:07:22.778 21:14:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68997' 00:07:22.778 21:14:46 -- common/autotest_common.sh@955 -- # kill 68997 00:07:22.778 21:14:46 -- common/autotest_common.sh@960 -- # wait 68997 00:07:23.037 00:07:23.037 real 0m0.955s 00:07:23.037 user 0m0.984s 00:07:23.037 sys 0m0.303s 00:07:23.037 21:14:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.037 21:14:46 -- common/autotest_common.sh@10 -- # set +x 00:07:23.037 ************************************ 00:07:23.037 END TEST accel_rpc 00:07:23.037 ************************************ 00:07:23.037 21:14:46 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:23.037 21:14:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:23.037 21:14:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.037 21:14:46 -- common/autotest_common.sh@10 -- # set +x 00:07:23.037 ************************************ 00:07:23.037 START TEST app_cmdline 00:07:23.037 ************************************ 00:07:23.037 21:14:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:23.037 * Looking for test storage... 
00:07:23.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:23.037 21:14:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:23.037 21:14:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:23.037 21:14:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:23.296 21:14:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:23.296 21:14:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:23.296 21:14:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:23.296 21:14:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:23.296 21:14:46 -- scripts/common.sh@335 -- # IFS=.-: 00:07:23.296 21:14:46 -- scripts/common.sh@335 -- # read -ra ver1 00:07:23.297 21:14:46 -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.297 21:14:46 -- scripts/common.sh@336 -- # read -ra ver2 00:07:23.297 21:14:46 -- scripts/common.sh@337 -- # local 'op=<' 00:07:23.297 21:14:46 -- scripts/common.sh@339 -- # ver1_l=2 00:07:23.297 21:14:46 -- scripts/common.sh@340 -- # ver2_l=1 00:07:23.297 21:14:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:23.297 21:14:46 -- scripts/common.sh@343 -- # case "$op" in 00:07:23.297 21:14:46 -- scripts/common.sh@344 -- # : 1 00:07:23.297 21:14:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:23.297 21:14:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.297 21:14:46 -- scripts/common.sh@364 -- # decimal 1 00:07:23.297 21:14:46 -- scripts/common.sh@352 -- # local d=1 00:07:23.297 21:14:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.297 21:14:46 -- scripts/common.sh@354 -- # echo 1 00:07:23.297 21:14:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:23.297 21:14:46 -- scripts/common.sh@365 -- # decimal 2 00:07:23.297 21:14:46 -- scripts/common.sh@352 -- # local d=2 00:07:23.297 21:14:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.297 21:14:46 -- scripts/common.sh@354 -- # echo 2 00:07:23.297 21:14:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:23.297 21:14:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:23.297 21:14:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:23.297 21:14:46 -- scripts/common.sh@367 -- # return 0 00:07:23.297 21:14:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.297 21:14:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:23.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.297 --rc genhtml_branch_coverage=1 00:07:23.297 --rc genhtml_function_coverage=1 00:07:23.297 --rc genhtml_legend=1 00:07:23.297 --rc geninfo_all_blocks=1 00:07:23.297 --rc geninfo_unexecuted_blocks=1 00:07:23.297 00:07:23.297 ' 00:07:23.297 21:14:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:23.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.297 --rc genhtml_branch_coverage=1 00:07:23.297 --rc genhtml_function_coverage=1 00:07:23.297 --rc genhtml_legend=1 00:07:23.297 --rc geninfo_all_blocks=1 00:07:23.297 --rc geninfo_unexecuted_blocks=1 00:07:23.297 00:07:23.297 ' 00:07:23.297 21:14:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:23.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.297 --rc genhtml_branch_coverage=1 00:07:23.297 --rc genhtml_function_coverage=1 00:07:23.297 --rc genhtml_legend=1 00:07:23.297 --rc geninfo_all_blocks=1 00:07:23.297 --rc geninfo_unexecuted_blocks=1 00:07:23.297 00:07:23.297 ' 00:07:23.297 21:14:46 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:23.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.297 --rc genhtml_branch_coverage=1 00:07:23.297 --rc genhtml_function_coverage=1 00:07:23.297 --rc genhtml_legend=1 00:07:23.297 --rc geninfo_all_blocks=1 00:07:23.297 --rc geninfo_unexecuted_blocks=1 00:07:23.297 00:07:23.297 ' 00:07:23.297 21:14:46 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:23.297 21:14:46 -- app/cmdline.sh@17 -- # spdk_tgt_pid=69084 00:07:23.297 21:14:46 -- app/cmdline.sh@18 -- # waitforlisten 69084 00:07:23.297 21:14:46 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:23.297 21:14:46 -- common/autotest_common.sh@829 -- # '[' -z 69084 ']' 00:07:23.297 21:14:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.297 21:14:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.297 21:14:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.297 21:14:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.297 21:14:46 -- common/autotest_common.sh@10 -- # set +x 00:07:23.297 [2024-11-28 21:14:46.899486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:23.297 [2024-11-28 21:14:46.899616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69084 ] 00:07:23.297 [2024-11-28 21:14:47.038613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.556 [2024-11-28 21:14:47.072950] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:23.556 [2024-11-28 21:14:47.073137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.493 21:14:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.493 21:14:47 -- common/autotest_common.sh@862 -- # return 0 00:07:24.493 21:14:47 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:24.493 { 00:07:24.493 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:24.493 "fields": { 00:07:24.493 "major": 24, 00:07:24.493 "minor": 1, 00:07:24.494 "patch": 1, 00:07:24.494 "suffix": "-pre", 00:07:24.494 "commit": "c13c99a5e" 00:07:24.494 } 00:07:24.494 } 00:07:24.494 21:14:48 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:24.494 21:14:48 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:24.494 21:14:48 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:24.494 21:14:48 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:24.494 21:14:48 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:24.494 21:14:48 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:24.494 21:14:48 -- app/cmdline.sh@26 -- # sort 00:07:24.494 21:14:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.494 21:14:48 -- common/autotest_common.sh@10 -- # set +x 00:07:24.494 21:14:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.494 21:14:48 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:24.494 21:14:48 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:24.494 21:14:48 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.494 21:14:48 -- common/autotest_common.sh@650 -- # local es=0 00:07:24.494 21:14:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.494 21:14:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.494 21:14:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.494 21:14:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.494 21:14:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.494 21:14:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.494 21:14:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.494 21:14:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.494 21:14:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:24.494 21:14:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.752 request: 00:07:24.752 { 00:07:24.752 "method": "env_dpdk_get_mem_stats", 00:07:24.752 "req_id": 1 00:07:24.752 } 00:07:24.752 Got JSON-RPC error response 00:07:24.752 response: 00:07:24.752 { 00:07:24.752 "code": -32601, 00:07:24.752 "message": "Method not found" 00:07:24.752 } 00:07:24.752 21:14:48 -- common/autotest_common.sh@653 -- # es=1 00:07:24.752 21:14:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.752 21:14:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.752 21:14:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.752 21:14:48 -- app/cmdline.sh@1 -- # killprocess 69084 00:07:24.752 21:14:48 -- common/autotest_common.sh@936 -- # '[' -z 69084 ']' 00:07:24.752 21:14:48 -- common/autotest_common.sh@940 -- # kill -0 69084 00:07:24.752 21:14:48 -- common/autotest_common.sh@941 -- # uname 00:07:24.752 21:14:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:24.752 21:14:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69084 00:07:25.011 21:14:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:25.011 21:14:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:25.011 killing process with pid 69084 00:07:25.011 21:14:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69084' 00:07:25.011 21:14:48 -- common/autotest_common.sh@955 -- # kill 69084 00:07:25.011 21:14:48 -- common/autotest_common.sh@960 -- # wait 69084 00:07:25.011 00:07:25.011 real 0m2.045s 00:07:25.011 user 0m2.694s 00:07:25.011 sys 0m0.380s 00:07:25.011 21:14:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.011 21:14:48 -- common/autotest_common.sh@10 -- # set +x 00:07:25.011 ************************************ 00:07:25.011 END TEST app_cmdline 00:07:25.011 ************************************ 00:07:25.011 21:14:48 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:25.011 21:14:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:25.011 21:14:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.011 21:14:48 -- common/autotest_common.sh@10 -- # set +x 00:07:25.270 
************************************ 00:07:25.270 START TEST version 00:07:25.270 ************************************ 00:07:25.270 21:14:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:25.270 * Looking for test storage... 00:07:25.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:25.270 21:14:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:25.270 21:14:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:25.270 21:14:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:25.270 21:14:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:25.270 21:14:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:25.270 21:14:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:25.270 21:14:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:25.270 21:14:48 -- scripts/common.sh@335 -- # IFS=.-: 00:07:25.270 21:14:48 -- scripts/common.sh@335 -- # read -ra ver1 00:07:25.270 21:14:48 -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.270 21:14:48 -- scripts/common.sh@336 -- # read -ra ver2 00:07:25.270 21:14:48 -- scripts/common.sh@337 -- # local 'op=<' 00:07:25.270 21:14:48 -- scripts/common.sh@339 -- # ver1_l=2 00:07:25.270 21:14:48 -- scripts/common.sh@340 -- # ver2_l=1 00:07:25.270 21:14:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:25.270 21:14:48 -- scripts/common.sh@343 -- # case "$op" in 00:07:25.270 21:14:48 -- scripts/common.sh@344 -- # : 1 00:07:25.270 21:14:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:25.270 21:14:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.270 21:14:48 -- scripts/common.sh@364 -- # decimal 1 00:07:25.270 21:14:48 -- scripts/common.sh@352 -- # local d=1 00:07:25.270 21:14:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.270 21:14:48 -- scripts/common.sh@354 -- # echo 1 00:07:25.270 21:14:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:25.270 21:14:48 -- scripts/common.sh@365 -- # decimal 2 00:07:25.270 21:14:48 -- scripts/common.sh@352 -- # local d=2 00:07:25.270 21:14:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.270 21:14:48 -- scripts/common.sh@354 -- # echo 2 00:07:25.270 21:14:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:25.270 21:14:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:25.270 21:14:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:25.270 21:14:48 -- scripts/common.sh@367 -- # return 0 00:07:25.270 21:14:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.270 21:14:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:25.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.270 --rc genhtml_branch_coverage=1 00:07:25.270 --rc genhtml_function_coverage=1 00:07:25.270 --rc genhtml_legend=1 00:07:25.270 --rc geninfo_all_blocks=1 00:07:25.270 --rc geninfo_unexecuted_blocks=1 00:07:25.270 00:07:25.270 ' 00:07:25.270 21:14:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:25.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.270 --rc genhtml_branch_coverage=1 00:07:25.270 --rc genhtml_function_coverage=1 00:07:25.270 --rc genhtml_legend=1 00:07:25.270 --rc geninfo_all_blocks=1 00:07:25.270 --rc geninfo_unexecuted_blocks=1 00:07:25.270 00:07:25.270 ' 00:07:25.270 21:14:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:25.270 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:25.270 --rc genhtml_branch_coverage=1 00:07:25.270 --rc genhtml_function_coverage=1 00:07:25.270 --rc genhtml_legend=1 00:07:25.270 --rc geninfo_all_blocks=1 00:07:25.270 --rc geninfo_unexecuted_blocks=1 00:07:25.270 00:07:25.270 ' 00:07:25.270 21:14:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:25.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.270 --rc genhtml_branch_coverage=1 00:07:25.270 --rc genhtml_function_coverage=1 00:07:25.271 --rc genhtml_legend=1 00:07:25.271 --rc geninfo_all_blocks=1 00:07:25.271 --rc geninfo_unexecuted_blocks=1 00:07:25.271 00:07:25.271 ' 00:07:25.271 21:14:48 -- app/version.sh@17 -- # get_header_version major 00:07:25.271 21:14:48 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.271 21:14:48 -- app/version.sh@14 -- # cut -f2 00:07:25.271 21:14:48 -- app/version.sh@14 -- # tr -d '"' 00:07:25.271 21:14:48 -- app/version.sh@17 -- # major=24 00:07:25.271 21:14:48 -- app/version.sh@18 -- # get_header_version minor 00:07:25.271 21:14:48 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.271 21:14:48 -- app/version.sh@14 -- # cut -f2 00:07:25.271 21:14:48 -- app/version.sh@14 -- # tr -d '"' 00:07:25.271 21:14:48 -- app/version.sh@18 -- # minor=1 00:07:25.271 21:14:48 -- app/version.sh@19 -- # get_header_version patch 00:07:25.271 21:14:48 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.271 21:14:48 -- app/version.sh@14 -- # cut -f2 00:07:25.271 21:14:48 -- app/version.sh@14 -- # tr -d '"' 00:07:25.271 21:14:48 -- app/version.sh@19 -- # patch=1 00:07:25.271 21:14:48 -- app/version.sh@20 -- # get_header_version suffix 00:07:25.271 21:14:48 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.271 21:14:48 -- app/version.sh@14 -- # tr -d '"' 00:07:25.271 21:14:48 -- app/version.sh@14 -- # cut -f2 00:07:25.271 21:14:48 -- app/version.sh@20 -- # suffix=-pre 00:07:25.271 21:14:48 -- app/version.sh@22 -- # version=24.1 00:07:25.271 21:14:48 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:25.271 21:14:48 -- app/version.sh@25 -- # version=24.1.1 00:07:25.271 21:14:48 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:25.271 21:14:48 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:25.271 21:14:48 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:25.271 21:14:49 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:25.271 21:14:49 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:25.271 00:07:25.271 real 0m0.250s 00:07:25.271 user 0m0.176s 00:07:25.271 sys 0m0.115s 00:07:25.271 21:14:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.271 21:14:49 -- common/autotest_common.sh@10 -- # set +x 00:07:25.271 ************************************ 00:07:25.271 END TEST version 00:07:25.271 ************************************ 00:07:25.530 21:14:49 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:25.530 21:14:49 -- spdk/autotest.sh@191 -- # uname -s 00:07:25.530 21:14:49 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
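The major/minor/patch/suffix values reported above come straight out of include/spdk/version.h; the sketch below restates the get_header_version pipeline recorded in the trace as standalone commands (the hdr variable is added here for readability, and the pipeline assumes the #define fields are tab-separated, which is what the recorded cut -f2 relies on). With a non-zero patch the script expands 24.1 to 24.1.1 and tags it 24.1.1rc0 to match the Python module's reported version.

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"'   # -> 24
    grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"'   # -> 1
    grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"'   # -> 1
    grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"'   # -> -pre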
00:07:25.530 21:14:49 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:25.530 21:14:49 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:07:25.530 21:14:49 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:07:25.530 21:14:49 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:25.530 21:14:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:25.530 21:14:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.530 21:14:49 -- common/autotest_common.sh@10 -- # set +x 00:07:25.530 ************************************ 00:07:25.530 START TEST spdk_dd 00:07:25.530 ************************************ 00:07:25.530 21:14:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:25.530 * Looking for test storage... 00:07:25.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:25.530 21:14:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:25.530 21:14:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:25.530 21:14:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:25.530 21:14:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:25.530 21:14:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:25.530 21:14:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:25.530 21:14:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:25.530 21:14:49 -- scripts/common.sh@335 -- # IFS=.-: 00:07:25.530 21:14:49 -- scripts/common.sh@335 -- # read -ra ver1 00:07:25.530 21:14:49 -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.530 21:14:49 -- scripts/common.sh@336 -- # read -ra ver2 00:07:25.530 21:14:49 -- scripts/common.sh@337 -- # local 'op=<' 00:07:25.530 21:14:49 -- scripts/common.sh@339 -- # ver1_l=2 00:07:25.530 21:14:49 -- scripts/common.sh@340 -- # ver2_l=1 00:07:25.530 21:14:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:25.530 21:14:49 -- scripts/common.sh@343 -- # case "$op" in 00:07:25.530 21:14:49 -- scripts/common.sh@344 -- # : 1 00:07:25.530 21:14:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:25.530 21:14:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.530 21:14:49 -- scripts/common.sh@364 -- # decimal 1 00:07:25.530 21:14:49 -- scripts/common.sh@352 -- # local d=1 00:07:25.530 21:14:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.530 21:14:49 -- scripts/common.sh@354 -- # echo 1 00:07:25.530 21:14:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:25.530 21:14:49 -- scripts/common.sh@365 -- # decimal 2 00:07:25.530 21:14:49 -- scripts/common.sh@352 -- # local d=2 00:07:25.530 21:14:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.530 21:14:49 -- scripts/common.sh@354 -- # echo 2 00:07:25.530 21:14:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:25.530 21:14:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:25.530 21:14:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:25.530 21:14:49 -- scripts/common.sh@367 -- # return 0 00:07:25.530 21:14:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.530 21:14:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:25.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.530 --rc genhtml_branch_coverage=1 00:07:25.530 --rc genhtml_function_coverage=1 00:07:25.530 --rc genhtml_legend=1 00:07:25.530 --rc geninfo_all_blocks=1 00:07:25.530 --rc geninfo_unexecuted_blocks=1 00:07:25.530 00:07:25.530 ' 00:07:25.530 21:14:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:25.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.531 --rc genhtml_branch_coverage=1 00:07:25.531 --rc genhtml_function_coverage=1 00:07:25.531 --rc genhtml_legend=1 00:07:25.531 --rc geninfo_all_blocks=1 00:07:25.531 --rc geninfo_unexecuted_blocks=1 00:07:25.531 00:07:25.531 ' 00:07:25.531 21:14:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:25.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.531 --rc genhtml_branch_coverage=1 00:07:25.531 --rc genhtml_function_coverage=1 00:07:25.531 --rc genhtml_legend=1 00:07:25.531 --rc geninfo_all_blocks=1 00:07:25.531 --rc geninfo_unexecuted_blocks=1 00:07:25.531 00:07:25.531 ' 00:07:25.531 21:14:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:25.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.531 --rc genhtml_branch_coverage=1 00:07:25.531 --rc genhtml_function_coverage=1 00:07:25.531 --rc genhtml_legend=1 00:07:25.531 --rc geninfo_all_blocks=1 00:07:25.531 --rc geninfo_unexecuted_blocks=1 00:07:25.531 00:07:25.531 ' 00:07:25.531 21:14:49 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.531 21:14:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.531 21:14:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.531 21:14:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.531 21:14:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.531 21:14:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.531 21:14:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.531 21:14:49 -- paths/export.sh@5 -- # export PATH 00:07:25.531 21:14:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.531 21:14:49 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:26.100 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:26.100 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:26.100 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:26.100 21:14:49 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:26.100 21:14:49 -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:26.100 21:14:49 -- scripts/common.sh@311 -- # local bdf bdfs 00:07:26.100 21:14:49 -- scripts/common.sh@312 -- # local nvmes 00:07:26.100 21:14:49 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:07:26.100 21:14:49 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:26.100 21:14:49 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:07:26.100 21:14:49 -- scripts/common.sh@297 -- # local bdf= 00:07:26.100 21:14:49 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:07:26.100 21:14:49 -- scripts/common.sh@232 -- # local class 00:07:26.100 21:14:49 -- scripts/common.sh@233 -- # local subclass 00:07:26.100 21:14:49 -- scripts/common.sh@234 -- # local progif 00:07:26.100 21:14:49 -- scripts/common.sh@235 -- # printf %02x 1 00:07:26.100 21:14:49 -- scripts/common.sh@235 -- # class=01 00:07:26.100 21:14:49 -- scripts/common.sh@236 -- # printf %02x 8 00:07:26.100 21:14:49 -- scripts/common.sh@236 -- # subclass=08 00:07:26.100 21:14:49 -- scripts/common.sh@237 -- # printf %02x 2 00:07:26.100 21:14:49 -- scripts/common.sh@237 -- # progif=02 00:07:26.100 21:14:49 -- scripts/common.sh@239 -- # hash lspci 00:07:26.100 21:14:49 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:07:26.100 21:14:49 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:07:26.100 21:14:49 -- scripts/common.sh@242 -- # grep -i -- -p02 00:07:26.100 21:14:49 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:26.100 21:14:49 -- scripts/common.sh@244 -- # tr -d '"' 00:07:26.100 21:14:49 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:26.100 21:14:49 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:07:26.100 21:14:49 -- scripts/common.sh@15 -- # local i 00:07:26.100 21:14:49 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:07:26.100 21:14:49 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:26.100 21:14:49 -- scripts/common.sh@24 -- # return 0 00:07:26.100 21:14:49 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:07:26.100 21:14:49 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:26.100 21:14:49 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:07:26.100 21:14:49 -- scripts/common.sh@15 -- # local i 00:07:26.100 21:14:49 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:07:26.100 21:14:49 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:26.100 21:14:49 -- scripts/common.sh@24 -- # return 0 00:07:26.100 21:14:49 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:07:26.100 21:14:49 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:26.100 21:14:49 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:07:26.100 21:14:49 -- scripts/common.sh@322 -- # uname -s 00:07:26.100 21:14:49 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:26.100 21:14:49 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:26.100 21:14:49 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:26.100 21:14:49 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:07:26.100 21:14:49 -- scripts/common.sh@322 -- # uname -s 00:07:26.101 21:14:49 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:26.101 21:14:49 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:26.101 21:14:49 -- scripts/common.sh@327 -- # (( 2 )) 00:07:26.101 21:14:49 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:07:26.101 21:14:49 -- dd/dd.sh@13 -- # check_liburing 00:07:26.101 21:14:49 -- dd/common.sh@139 -- # local lib so 00:07:26.101 21:14:49 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:07:26.101 21:14:49 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:07:26.101 
21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.101 21:14:49 -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:26.101 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == 
liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:26.102 21:14:49 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:26.102 21:14:49 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:26.102 * spdk_dd linked to liburing 00:07:26.102 21:14:49 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:26.102 21:14:49 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:26.102 21:14:49 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:26.102 21:14:49 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:26.102 21:14:49 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:26.102 21:14:49 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:26.102 21:14:49 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:26.102 21:14:49 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:26.102 21:14:49 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:26.102 21:14:49 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:26.102 21:14:49 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:26.102 21:14:49 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:26.102 21:14:49 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:26.102 21:14:49 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:26.102 21:14:49 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:26.102 21:14:49 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:26.102 21:14:49 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:26.102 21:14:49 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:26.102 21:14:49 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:26.102 21:14:49 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:26.102 21:14:49 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:26.102 21:14:49 -- common/build_config.sh@20 -- # 
CONFIG_LTO=n 00:07:26.102 21:14:49 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:26.102 21:14:49 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:26.102 21:14:49 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:26.102 21:14:49 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:26.102 21:14:49 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:26.102 21:14:49 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:26.102 21:14:49 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:26.102 21:14:49 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:26.102 21:14:49 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:26.102 21:14:49 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:26.102 21:14:49 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:26.102 21:14:49 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:26.102 21:14:49 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:26.102 21:14:49 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:26.102 21:14:49 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:26.102 21:14:49 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:26.102 21:14:49 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:26.102 21:14:49 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:26.102 21:14:49 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:26.102 21:14:49 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:26.102 21:14:49 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:26.102 21:14:49 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:26.102 21:14:49 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:26.102 21:14:49 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:26.102 21:14:49 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:26.102 21:14:49 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:26.102 21:14:49 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:26.102 21:14:49 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:26.102 21:14:49 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:26.102 21:14:49 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:26.102 21:14:49 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:26.102 21:14:49 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:26.102 21:14:49 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:07:26.102 21:14:49 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:26.102 21:14:49 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:26.102 21:14:49 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:26.102 21:14:49 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:26.102 21:14:49 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:26.102 21:14:49 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:26.102 21:14:49 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:26.102 21:14:49 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:26.102 21:14:49 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:26.102 21:14:49 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:26.102 21:14:49 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:26.102 21:14:49 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:26.102 21:14:49 -- common/build_config.sh@66 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:07:26.102 21:14:49 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:26.102 21:14:49 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:26.102 21:14:49 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:26.102 21:14:49 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:26.102 21:14:49 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:26.102 21:14:49 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:26.102 21:14:49 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:26.103 21:14:49 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:26.103 21:14:49 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:26.103 21:14:49 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:26.103 21:14:49 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:26.103 21:14:49 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:26.103 21:14:49 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:07:26.103 21:14:49 -- dd/common.sh@149 -- # [[ y != y ]] 00:07:26.103 21:14:49 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:07:26.103 21:14:49 -- dd/common.sh@156 -- # export liburing_in_use=1 00:07:26.103 21:14:49 -- dd/common.sh@156 -- # liburing_in_use=1 00:07:26.103 21:14:49 -- dd/common.sh@157 -- # return 0 00:07:26.103 21:14:49 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:26.103 21:14:49 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:26.103 21:14:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:26.103 21:14:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.103 21:14:49 -- common/autotest_common.sh@10 -- # set +x 00:07:26.103 ************************************ 00:07:26.103 START TEST spdk_dd_basic_rw 00:07:26.103 ************************************ 00:07:26.103 21:14:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:26.103 * Looking for test storage... 00:07:26.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:26.103 21:14:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:26.103 21:14:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:26.103 21:14:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:26.362 21:14:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:26.362 21:14:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:26.362 21:14:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:26.362 21:14:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:26.362 21:14:49 -- scripts/common.sh@335 -- # IFS=.-: 00:07:26.362 21:14:49 -- scripts/common.sh@335 -- # read -ra ver1 00:07:26.362 21:14:49 -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.362 21:14:49 -- scripts/common.sh@336 -- # read -ra ver2 00:07:26.362 21:14:49 -- scripts/common.sh@337 -- # local 'op=<' 00:07:26.362 21:14:49 -- scripts/common.sh@339 -- # ver1_l=2 00:07:26.362 21:14:49 -- scripts/common.sh@340 -- # ver2_l=1 00:07:26.362 21:14:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:26.362 21:14:49 -- scripts/common.sh@343 -- # case "$op" in 00:07:26.362 21:14:49 -- scripts/common.sh@344 -- # : 1 00:07:26.362 21:14:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:26.362 21:14:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.362 21:14:49 -- scripts/common.sh@364 -- # decimal 1 00:07:26.362 21:14:49 -- scripts/common.sh@352 -- # local d=1 00:07:26.362 21:14:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.362 21:14:49 -- scripts/common.sh@354 -- # echo 1 00:07:26.362 21:14:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:26.362 21:14:49 -- scripts/common.sh@365 -- # decimal 2 00:07:26.362 21:14:49 -- scripts/common.sh@352 -- # local d=2 00:07:26.362 21:14:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.362 21:14:49 -- scripts/common.sh@354 -- # echo 2 00:07:26.362 21:14:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:26.362 21:14:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:26.362 21:14:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:26.362 21:14:49 -- scripts/common.sh@367 -- # return 0 00:07:26.362 21:14:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.362 21:14:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:26.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.362 --rc genhtml_branch_coverage=1 00:07:26.362 --rc genhtml_function_coverage=1 00:07:26.362 --rc genhtml_legend=1 00:07:26.362 --rc geninfo_all_blocks=1 00:07:26.362 --rc geninfo_unexecuted_blocks=1 00:07:26.362 00:07:26.362 ' 00:07:26.362 21:14:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:26.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.362 --rc genhtml_branch_coverage=1 00:07:26.362 --rc genhtml_function_coverage=1 00:07:26.362 --rc genhtml_legend=1 00:07:26.362 --rc geninfo_all_blocks=1 00:07:26.362 --rc geninfo_unexecuted_blocks=1 00:07:26.362 00:07:26.362 ' 00:07:26.362 21:14:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:26.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.362 --rc genhtml_branch_coverage=1 00:07:26.362 --rc genhtml_function_coverage=1 00:07:26.362 --rc genhtml_legend=1 00:07:26.362 --rc geninfo_all_blocks=1 00:07:26.362 --rc geninfo_unexecuted_blocks=1 00:07:26.362 00:07:26.362 ' 00:07:26.362 21:14:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:26.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.362 --rc genhtml_branch_coverage=1 00:07:26.362 --rc genhtml_function_coverage=1 00:07:26.362 --rc genhtml_legend=1 00:07:26.362 --rc geninfo_all_blocks=1 00:07:26.362 --rc geninfo_unexecuted_blocks=1 00:07:26.362 00:07:26.362 ' 00:07:26.362 21:14:49 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.362 21:14:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.362 21:14:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.362 21:14:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.362 21:14:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.363 21:14:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.363 21:14:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.363 21:14:49 -- paths/export.sh@5 -- # export PATH 00:07:26.363 21:14:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.363 21:14:49 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:26.363 21:14:49 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:26.363 21:14:49 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:26.363 21:14:49 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:07:26.363 21:14:49 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:26.363 21:14:49 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:07:26.363 21:14:49 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:26.363 21:14:49 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:26.363 21:14:49 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:26.363 21:14:49 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:07:26.363 21:14:49 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:07:26.363 21:14:49 -- dd/common.sh@126 -- # mapfile -t id 00:07:26.363 21:14:49 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:07:26.624 21:14:50 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe 
Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 
Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 101 Data Units Written: 9 Host Read Commands: 2328 Host Write Commands: 96 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA 
Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:26.624 21:14:50 -- dd/common.sh@130 -- # lbaf=04 00:07:26.624 21:14:50 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple 
Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 101 
Data Units Written: 9 Host Read Commands: 2328 Host Write Commands: 96 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:26.624 21:14:50 -- dd/common.sh@132 -- # lbaf=4096 00:07:26.624 21:14:50 -- dd/common.sh@134 -- # echo 4096 00:07:26.624 21:14:50 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:26.624 21:14:50 -- dd/basic_rw.sh@96 -- # : 00:07:26.624 21:14:50 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:26.624 21:14:50 -- dd/basic_rw.sh@96 -- # gen_conf 00:07:26.624 21:14:50 -- dd/common.sh@31 -- # xtrace_disable 00:07:26.624 21:14:50 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:26.624 21:14:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.624 21:14:50 -- common/autotest_common.sh@10 -- # set +x 00:07:26.624 21:14:50 -- common/autotest_common.sh@10 -- # set +x 00:07:26.624 ************************************ 00:07:26.624 START TEST dd_bs_lt_native_bs 00:07:26.624 ************************************ 00:07:26.624 21:14:50 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:26.624 21:14:50 -- common/autotest_common.sh@650 -- # local es=0 00:07:26.624 21:14:50 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:26.624 21:14:50 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.624 21:14:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.624 21:14:50 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.624 21:14:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.624 21:14:50 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.625 21:14:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.625 21:14:50 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.625 21:14:50 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:26.625 21:14:50 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:26.625 { 00:07:26.625 "subsystems": [ 00:07:26.625 { 00:07:26.625 "subsystem": "bdev", 00:07:26.625 "config": [ 00:07:26.625 { 00:07:26.625 "params": { 00:07:26.625 "trtype": "pcie", 00:07:26.625 "traddr": "0000:00:06.0", 00:07:26.625 "name": "Nvme0" 00:07:26.625 }, 00:07:26.625 "method": "bdev_nvme_attach_controller" 00:07:26.625 }, 00:07:26.625 { 00:07:26.625 "method": "bdev_wait_for_examine" 00:07:26.625 } 00:07:26.625 ] 00:07:26.625 } 00:07:26.625 ] 00:07:26.625 } 00:07:26.625 [2024-11-28 21:14:50.187507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:26.625 [2024-11-28 21:14:50.187631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69433 ] 00:07:26.625 [2024-11-28 21:14:50.327820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.884 [2024-11-28 21:14:50.367760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.884 [2024-11-28 21:14:50.484148] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:26.884 [2024-11-28 21:14:50.484212] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.884 [2024-11-28 21:14:50.553499] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:26.884 21:14:50 -- common/autotest_common.sh@653 -- # es=234 00:07:26.884 21:14:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:26.884 21:14:50 -- common/autotest_common.sh@662 -- # es=106 00:07:26.884 21:14:50 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:26.884 21:14:50 -- common/autotest_common.sh@670 -- # es=1 00:07:26.884 21:14:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:26.884 00:07:26.884 real 0m0.486s 00:07:26.884 user 0m0.331s 00:07:26.884 sys 0m0.107s 00:07:26.884 ************************************ 00:07:26.884 END TEST dd_bs_lt_native_bs 00:07:26.884 ************************************ 00:07:26.884 21:14:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.884 21:14:50 -- common/autotest_common.sh@10 -- # set +x 00:07:27.142 21:14:50 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:27.142 21:14:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:27.142 21:14:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.142 21:14:50 -- common/autotest_common.sh@10 -- # set +x 00:07:27.142 ************************************ 00:07:27.142 START TEST dd_rw 00:07:27.142 ************************************ 00:07:27.142 21:14:50 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:07:27.142 21:14:50 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:27.142 21:14:50 -- dd/basic_rw.sh@12 -- # local count size 00:07:27.142 21:14:50 -- dd/basic_rw.sh@13 -- # local qds bss 00:07:27.142 21:14:50 -- dd/basic_rw.sh@15 -- # qds=(1 64) 
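The native_bs=4096 picked up just above was extracted from the spdk_nvme_identify dump matched earlier in this trace. A minimal sketch of that extraction, assuming the identify text has already been captured into $id (the real dd/common.sh logic differs in detail):

    # Pull the active LBA format out of the identify dump, then look up its data size.
    re_cur='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re_cur ]] && lbaf=${BASH_REMATCH[1]}           # "04" in the run above
    re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}     # 4096 bytes
    echo "$native_bs"

The dd_bs_lt_native_bs test that just finished leans on this value: spdk_dd was invoked with --bs=2048, below the 4096-byte native size, and the expected "--bs value cannot be less than ... native block size" error is what lets the NOT wrapper report success.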
00:07:27.142 21:14:50 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:27.142 21:14:50 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:27.142 21:14:50 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:27.142 21:14:50 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:27.142 21:14:50 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:27.142 21:14:50 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:27.142 21:14:50 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:27.142 21:14:50 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:27.143 21:14:50 -- dd/basic_rw.sh@23 -- # count=15 00:07:27.143 21:14:50 -- dd/basic_rw.sh@24 -- # count=15 00:07:27.143 21:14:50 -- dd/basic_rw.sh@25 -- # size=61440 00:07:27.143 21:14:50 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:27.143 21:14:50 -- dd/common.sh@98 -- # xtrace_disable 00:07:27.143 21:14:50 -- common/autotest_common.sh@10 -- # set +x 00:07:27.712 21:14:51 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:27.712 21:14:51 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:27.712 21:14:51 -- dd/common.sh@31 -- # xtrace_disable 00:07:27.712 21:14:51 -- common/autotest_common.sh@10 -- # set +x 00:07:27.712 [2024-11-28 21:14:51.336166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:27.712 [2024-11-28 21:14:51.336451] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69464 ] 00:07:27.712 { 00:07:27.712 "subsystems": [ 00:07:27.712 { 00:07:27.712 "subsystem": "bdev", 00:07:27.712 "config": [ 00:07:27.712 { 00:07:27.712 "params": { 00:07:27.712 "trtype": "pcie", 00:07:27.712 "traddr": "0000:00:06.0", 00:07:27.712 "name": "Nvme0" 00:07:27.712 }, 00:07:27.712 "method": "bdev_nvme_attach_controller" 00:07:27.712 }, 00:07:27.712 { 00:07:27.712 "method": "bdev_wait_for_examine" 00:07:27.712 } 00:07:27.712 ] 00:07:27.712 } 00:07:27.712 ] 00:07:27.712 } 00:07:27.972 [2024-11-28 21:14:51.476072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.972 [2024-11-28 21:14:51.515112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.972  [2024-11-28T21:14:51.975Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:28.232 00:07:28.232 21:14:51 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:28.232 21:14:51 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:28.232 21:14:51 -- dd/common.sh@31 -- # xtrace_disable 00:07:28.232 21:14:51 -- common/autotest_common.sh@10 -- # set +x 00:07:28.232 [2024-11-28 21:14:51.815656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
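The write that has just started is the first cell of a small block-size/queue-depth matrix. A compact recap of how the values traced above fit together (a sketch of the shape of basic_rw.sh, not the script itself):

    native_bs=4096
    qds=(1 64)
    bss=()
    for bs in {0..2}; do
      bss+=($((native_bs << bs)))    # 4096, 8192, 16384
    done
    # first pass: bs=4096, qd=1, count=15  ->  size = 15 * 4096 = 61440 bytes of test data

Each combination of an entry from bss and qds is exercised with the same write, read-back, and verify sequence.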
00:07:28.232 [2024-11-28 21:14:51.815942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69477 ] 00:07:28.232 { 00:07:28.232 "subsystems": [ 00:07:28.232 { 00:07:28.232 "subsystem": "bdev", 00:07:28.233 "config": [ 00:07:28.233 { 00:07:28.233 "params": { 00:07:28.233 "trtype": "pcie", 00:07:28.233 "traddr": "0000:00:06.0", 00:07:28.233 "name": "Nvme0" 00:07:28.233 }, 00:07:28.233 "method": "bdev_nvme_attach_controller" 00:07:28.233 }, 00:07:28.233 { 00:07:28.233 "method": "bdev_wait_for_examine" 00:07:28.233 } 00:07:28.233 ] 00:07:28.233 } 00:07:28.233 ] 00:07:28.233 } 00:07:28.233 [2024-11-28 21:14:51.949781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.491 [2024-11-28 21:14:51.982292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.491  [2024-11-28T21:14:52.492Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:28.749 00:07:28.750 21:14:52 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.750 21:14:52 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:28.750 21:14:52 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:28.750 21:14:52 -- dd/common.sh@11 -- # local nvme_ref= 00:07:28.750 21:14:52 -- dd/common.sh@12 -- # local size=61440 00:07:28.750 21:14:52 -- dd/common.sh@14 -- # local bs=1048576 00:07:28.750 21:14:52 -- dd/common.sh@15 -- # local count=1 00:07:28.750 21:14:52 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:28.750 21:14:52 -- dd/common.sh@18 -- # gen_conf 00:07:28.750 21:14:52 -- dd/common.sh@31 -- # xtrace_disable 00:07:28.750 21:14:52 -- common/autotest_common.sh@10 -- # set +x 00:07:28.750 [2024-11-28 21:14:52.312949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
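The pass that just completed follows the cycle used for every bs/qd combination: write a generated pattern to the bdev, read it back, and compare. Condensed from the commands traced above (paths shortened; a sketch, not the exact test code):

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$DD" --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62             # write the 61440-byte pattern
    "$DD" --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62  # read it back
    diff -q test/dd/dd.dump0 test/dd/dd.dump1                                               # verify the data round-tripped

Here /dev/fd/62 carries the bdev configuration emitted by gen_conf, and the clear_nvme call traced right after the diff restores the bdev to zeroes before the next combination runs.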
00:07:28.750 [2024-11-28 21:14:52.313111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69490 ] 00:07:28.750 { 00:07:28.750 "subsystems": [ 00:07:28.750 { 00:07:28.750 "subsystem": "bdev", 00:07:28.750 "config": [ 00:07:28.750 { 00:07:28.750 "params": { 00:07:28.750 "trtype": "pcie", 00:07:28.750 "traddr": "0000:00:06.0", 00:07:28.750 "name": "Nvme0" 00:07:28.750 }, 00:07:28.750 "method": "bdev_nvme_attach_controller" 00:07:28.750 }, 00:07:28.750 { 00:07:28.750 "method": "bdev_wait_for_examine" 00:07:28.750 } 00:07:28.750 ] 00:07:28.750 } 00:07:28.750 ] 00:07:28.750 } 00:07:28.750 [2024-11-28 21:14:52.456657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.750 [2024-11-28 21:14:52.486399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.009  [2024-11-28T21:14:53.011Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:29.268 00:07:29.268 21:14:52 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:29.268 21:14:52 -- dd/basic_rw.sh@23 -- # count=15 00:07:29.268 21:14:52 -- dd/basic_rw.sh@24 -- # count=15 00:07:29.268 21:14:52 -- dd/basic_rw.sh@25 -- # size=61440 00:07:29.268 21:14:52 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:29.268 21:14:52 -- dd/common.sh@98 -- # xtrace_disable 00:07:29.268 21:14:52 -- common/autotest_common.sh@10 -- # set +x 00:07:29.836 21:14:53 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:29.836 21:14:53 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:29.836 21:14:53 -- dd/common.sh@31 -- # xtrace_disable 00:07:29.836 21:14:53 -- common/autotest_common.sh@10 -- # set +x 00:07:29.836 [2024-11-28 21:14:53.363980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
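Every spdk_dd invocation in this trace receives the same bdev configuration on /dev/fd/62; it is the JSON block repeated in the output above. A standalone way to reproduce one of the wipe steps (clear_nvme) with that config, using process substitution instead of the script's fd-62 plumbing, would look roughly like:

    # Config taken verbatim from the runs above: attach the PCIe controller at 0000:00:06.0
    # as "Nvme0" and wait for bdev examination before doing I/O.
    conf='{"subsystems":[{"subsystem":"bdev","config":[
            {"params":{"trtype":"pcie","traddr":"0000:00:06.0","name":"Nvme0"},
             "method":"bdev_nvme_attach_controller"},
            {"method":"bdev_wait_for_examine"}]}]}'
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$conf")

This zeroes the first 1 MiB of Nvme0n1, which is why each pass is followed by a 1024/1024 kB copy before the next block size or queue depth is tried.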
00:07:29.836 [2024-11-28 21:14:53.364503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69508 ] 00:07:29.836 { 00:07:29.836 "subsystems": [ 00:07:29.836 { 00:07:29.836 "subsystem": "bdev", 00:07:29.836 "config": [ 00:07:29.836 { 00:07:29.836 "params": { 00:07:29.836 "trtype": "pcie", 00:07:29.836 "traddr": "0000:00:06.0", 00:07:29.836 "name": "Nvme0" 00:07:29.836 }, 00:07:29.836 "method": "bdev_nvme_attach_controller" 00:07:29.836 }, 00:07:29.836 { 00:07:29.836 "method": "bdev_wait_for_examine" 00:07:29.836 } 00:07:29.836 ] 00:07:29.836 } 00:07:29.836 ] 00:07:29.836 } 00:07:29.836 [2024-11-28 21:14:53.503796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.836 [2024-11-28 21:14:53.533726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.095  [2024-11-28T21:14:53.838Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:30.095 00:07:30.095 21:14:53 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:30.095 21:14:53 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:30.095 21:14:53 -- dd/common.sh@31 -- # xtrace_disable 00:07:30.095 21:14:53 -- common/autotest_common.sh@10 -- # set +x 00:07:30.354 [2024-11-28 21:14:53.842080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:30.354 [2024-11-28 21:14:53.842163] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69521 ] 00:07:30.354 { 00:07:30.354 "subsystems": [ 00:07:30.354 { 00:07:30.354 "subsystem": "bdev", 00:07:30.354 "config": [ 00:07:30.354 { 00:07:30.354 "params": { 00:07:30.354 "trtype": "pcie", 00:07:30.354 "traddr": "0000:00:06.0", 00:07:30.354 "name": "Nvme0" 00:07:30.354 }, 00:07:30.354 "method": "bdev_nvme_attach_controller" 00:07:30.354 }, 00:07:30.354 { 00:07:30.354 "method": "bdev_wait_for_examine" 00:07:30.354 } 00:07:30.354 ] 00:07:30.354 } 00:07:30.354 ] 00:07:30.354 } 00:07:30.354 [2024-11-28 21:14:53.980482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.354 [2024-11-28 21:14:54.011114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.612  [2024-11-28T21:14:54.355Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:30.612 00:07:30.612 21:14:54 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:30.612 21:14:54 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:30.612 21:14:54 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:30.612 21:14:54 -- dd/common.sh@11 -- # local nvme_ref= 00:07:30.612 21:14:54 -- dd/common.sh@12 -- # local size=61440 00:07:30.612 21:14:54 -- dd/common.sh@14 -- # local bs=1048576 00:07:30.612 21:14:54 -- dd/common.sh@15 -- # local count=1 00:07:30.612 21:14:54 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:30.612 21:14:54 -- dd/common.sh@18 -- # gen_conf 00:07:30.612 21:14:54 -- dd/common.sh@31 -- # xtrace_disable 00:07:30.612 21:14:54 -- common/autotest_common.sh@10 -- # set +x 00:07:30.612 [2024-11-28 
21:14:54.305209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:30.612 [2024-11-28 21:14:54.305945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69529 ] 00:07:30.612 { 00:07:30.612 "subsystems": [ 00:07:30.612 { 00:07:30.612 "subsystem": "bdev", 00:07:30.612 "config": [ 00:07:30.612 { 00:07:30.612 "params": { 00:07:30.612 "trtype": "pcie", 00:07:30.612 "traddr": "0000:00:06.0", 00:07:30.612 "name": "Nvme0" 00:07:30.612 }, 00:07:30.612 "method": "bdev_nvme_attach_controller" 00:07:30.612 }, 00:07:30.612 { 00:07:30.612 "method": "bdev_wait_for_examine" 00:07:30.612 } 00:07:30.612 ] 00:07:30.612 } 00:07:30.612 ] 00:07:30.612 } 00:07:30.871 [2024-11-28 21:14:54.443765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.871 [2024-11-28 21:14:54.473136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.871  [2024-11-28T21:14:54.873Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:31.130 00:07:31.130 21:14:54 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:31.130 21:14:54 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:31.130 21:14:54 -- dd/basic_rw.sh@23 -- # count=7 00:07:31.130 21:14:54 -- dd/basic_rw.sh@24 -- # count=7 00:07:31.130 21:14:54 -- dd/basic_rw.sh@25 -- # size=57344 00:07:31.130 21:14:54 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:31.130 21:14:54 -- dd/common.sh@98 -- # xtrace_disable 00:07:31.130 21:14:54 -- common/autotest_common.sh@10 -- # set +x 00:07:31.698 21:14:55 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:31.698 21:14:55 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:31.698 21:14:55 -- dd/common.sh@31 -- # xtrace_disable 00:07:31.698 21:14:55 -- common/autotest_common.sh@10 -- # set +x 00:07:31.698 [2024-11-28 21:14:55.284961] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
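From here the same cycle repeats with the next entry of the block-size matrix, and the count is scaled down as the block size grows. The values traced just above work out as:

    # second pass: bs = 4096 << 1 = 8192, qd=1, count=7  ->  size = 7 * 8192 = 57344 bytes
    echo $((4096 << 1)) $((7 * 8192))    # prints: 8192 57344

so the gen_bytes 57344 call above regenerates the test pattern at the new size before the 8 KiB writes and read-backs begin.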
00:07:31.698 [2024-11-28 21:14:55.285327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69547 ] 00:07:31.698 { 00:07:31.698 "subsystems": [ 00:07:31.698 { 00:07:31.698 "subsystem": "bdev", 00:07:31.698 "config": [ 00:07:31.698 { 00:07:31.698 "params": { 00:07:31.698 "trtype": "pcie", 00:07:31.698 "traddr": "0000:00:06.0", 00:07:31.698 "name": "Nvme0" 00:07:31.698 }, 00:07:31.698 "method": "bdev_nvme_attach_controller" 00:07:31.698 }, 00:07:31.698 { 00:07:31.698 "method": "bdev_wait_for_examine" 00:07:31.698 } 00:07:31.698 ] 00:07:31.698 } 00:07:31.698 ] 00:07:31.698 } 00:07:31.698 [2024-11-28 21:14:55.425792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.957 [2024-11-28 21:14:55.460533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.957  [2024-11-28T21:14:55.959Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:32.216 00:07:32.216 21:14:55 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:32.216 21:14:55 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:32.216 21:14:55 -- dd/common.sh@31 -- # xtrace_disable 00:07:32.216 21:14:55 -- common/autotest_common.sh@10 -- # set +x 00:07:32.216 [2024-11-28 21:14:55.764700] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:32.216 [2024-11-28 21:14:55.764790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69565 ] 00:07:32.216 { 00:07:32.216 "subsystems": [ 00:07:32.216 { 00:07:32.216 "subsystem": "bdev", 00:07:32.216 "config": [ 00:07:32.216 { 00:07:32.216 "params": { 00:07:32.216 "trtype": "pcie", 00:07:32.216 "traddr": "0000:00:06.0", 00:07:32.216 "name": "Nvme0" 00:07:32.216 }, 00:07:32.216 "method": "bdev_nvme_attach_controller" 00:07:32.216 }, 00:07:32.216 { 00:07:32.216 "method": "bdev_wait_for_examine" 00:07:32.216 } 00:07:32.216 ] 00:07:32.216 } 00:07:32.216 ] 00:07:32.216 } 00:07:32.216 [2024-11-28 21:14:55.900321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.216 [2024-11-28 21:14:55.929680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.475  [2024-11-28T21:14:56.218Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:32.475 00:07:32.475 21:14:56 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.475 21:14:56 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:32.475 21:14:56 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:32.475 21:14:56 -- dd/common.sh@11 -- # local nvme_ref= 00:07:32.475 21:14:56 -- dd/common.sh@12 -- # local size=57344 00:07:32.475 21:14:56 -- dd/common.sh@14 -- # local bs=1048576 00:07:32.475 21:14:56 -- dd/common.sh@15 -- # local count=1 00:07:32.475 21:14:56 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:32.475 21:14:56 -- dd/common.sh@18 -- # gen_conf 00:07:32.475 21:14:56 -- dd/common.sh@31 -- # xtrace_disable 00:07:32.475 21:14:56 -- common/autotest_common.sh@10 -- # set +x 00:07:32.734 [2024-11-28 
21:14:56.239971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:32.734 [2024-11-28 21:14:56.240083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69573 ] 00:07:32.734 { 00:07:32.734 "subsystems": [ 00:07:32.734 { 00:07:32.734 "subsystem": "bdev", 00:07:32.734 "config": [ 00:07:32.734 { 00:07:32.734 "params": { 00:07:32.734 "trtype": "pcie", 00:07:32.734 "traddr": "0000:00:06.0", 00:07:32.734 "name": "Nvme0" 00:07:32.734 }, 00:07:32.734 "method": "bdev_nvme_attach_controller" 00:07:32.734 }, 00:07:32.734 { 00:07:32.734 "method": "bdev_wait_for_examine" 00:07:32.734 } 00:07:32.734 ] 00:07:32.734 } 00:07:32.734 ] 00:07:32.734 } 00:07:32.734 [2024-11-28 21:14:56.377496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.734 [2024-11-28 21:14:56.410628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.993  [2024-11-28T21:14:56.736Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:32.993 00:07:32.993 21:14:56 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:32.993 21:14:56 -- dd/basic_rw.sh@23 -- # count=7 00:07:32.993 21:14:56 -- dd/basic_rw.sh@24 -- # count=7 00:07:32.993 21:14:56 -- dd/basic_rw.sh@25 -- # size=57344 00:07:32.993 21:14:56 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:32.993 21:14:56 -- dd/common.sh@98 -- # xtrace_disable 00:07:32.993 21:14:56 -- common/autotest_common.sh@10 -- # set +x 00:07:33.563 21:14:57 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:33.563 21:14:57 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:33.563 21:14:57 -- dd/common.sh@31 -- # xtrace_disable 00:07:33.563 21:14:57 -- common/autotest_common.sh@10 -- # set +x 00:07:33.563 [2024-11-28 21:14:57.193442] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:33.563 [2024-11-28 21:14:57.193725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69591 ] 00:07:33.563 { 00:07:33.563 "subsystems": [ 00:07:33.563 { 00:07:33.563 "subsystem": "bdev", 00:07:33.563 "config": [ 00:07:33.563 { 00:07:33.563 "params": { 00:07:33.563 "trtype": "pcie", 00:07:33.563 "traddr": "0000:00:06.0", 00:07:33.563 "name": "Nvme0" 00:07:33.563 }, 00:07:33.563 "method": "bdev_nvme_attach_controller" 00:07:33.563 }, 00:07:33.563 { 00:07:33.563 "method": "bdev_wait_for_examine" 00:07:33.563 } 00:07:33.563 ] 00:07:33.563 } 00:07:33.563 ] 00:07:33.563 } 00:07:33.823 [2024-11-28 21:14:57.331332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.823 [2024-11-28 21:14:57.360917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.823  [2024-11-28T21:14:57.825Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:34.082 00:07:34.082 21:14:57 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:34.082 21:14:57 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:34.082 21:14:57 -- dd/common.sh@31 -- # xtrace_disable 00:07:34.082 21:14:57 -- common/autotest_common.sh@10 -- # set +x 00:07:34.082 [2024-11-28 21:14:57.639700] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:34.082 [2024-11-28 21:14:57.639942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69603 ] 00:07:34.082 { 00:07:34.082 "subsystems": [ 00:07:34.082 { 00:07:34.082 "subsystem": "bdev", 00:07:34.082 "config": [ 00:07:34.082 { 00:07:34.082 "params": { 00:07:34.082 "trtype": "pcie", 00:07:34.082 "traddr": "0000:00:06.0", 00:07:34.082 "name": "Nvme0" 00:07:34.082 }, 00:07:34.082 "method": "bdev_nvme_attach_controller" 00:07:34.082 }, 00:07:34.082 { 00:07:34.082 "method": "bdev_wait_for_examine" 00:07:34.082 } 00:07:34.082 ] 00:07:34.082 } 00:07:34.082 ] 00:07:34.082 } 00:07:34.082 [2024-11-28 21:14:57.768569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.082 [2024-11-28 21:14:57.798948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.341  [2024-11-28T21:14:58.084Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:34.341 00:07:34.341 21:14:58 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:34.341 21:14:58 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:34.341 21:14:58 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:34.341 21:14:58 -- dd/common.sh@11 -- # local nvme_ref= 00:07:34.341 21:14:58 -- dd/common.sh@12 -- # local size=57344 00:07:34.341 21:14:58 -- dd/common.sh@14 -- # local bs=1048576 00:07:34.341 21:14:58 -- dd/common.sh@15 -- # local count=1 00:07:34.341 21:14:58 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:34.341 21:14:58 -- dd/common.sh@18 -- # gen_conf 00:07:34.341 21:14:58 -- dd/common.sh@31 -- # xtrace_disable 00:07:34.342 21:14:58 -- common/autotest_common.sh@10 -- # set +x 00:07:34.601 [2024-11-28 
21:14:58.099449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:34.601 [2024-11-28 21:14:58.099560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69617 ] 00:07:34.601 { 00:07:34.601 "subsystems": [ 00:07:34.601 { 00:07:34.601 "subsystem": "bdev", 00:07:34.601 "config": [ 00:07:34.601 { 00:07:34.601 "params": { 00:07:34.601 "trtype": "pcie", 00:07:34.601 "traddr": "0000:00:06.0", 00:07:34.601 "name": "Nvme0" 00:07:34.601 }, 00:07:34.601 "method": "bdev_nvme_attach_controller" 00:07:34.601 }, 00:07:34.601 { 00:07:34.601 "method": "bdev_wait_for_examine" 00:07:34.601 } 00:07:34.601 ] 00:07:34.601 } 00:07:34.601 ] 00:07:34.601 } 00:07:34.601 [2024-11-28 21:14:58.235702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.601 [2024-11-28 21:14:58.264852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.859  [2024-11-28T21:14:58.602Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:34.860 00:07:34.860 21:14:58 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:34.860 21:14:58 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:34.860 21:14:58 -- dd/basic_rw.sh@23 -- # count=3 00:07:34.860 21:14:58 -- dd/basic_rw.sh@24 -- # count=3 00:07:34.860 21:14:58 -- dd/basic_rw.sh@25 -- # size=49152 00:07:34.860 21:14:58 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:34.860 21:14:58 -- dd/common.sh@98 -- # xtrace_disable 00:07:34.860 21:14:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.427 21:14:58 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:35.427 21:14:58 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:35.427 21:14:58 -- dd/common.sh@31 -- # xtrace_disable 00:07:35.427 21:14:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.427 [2024-11-28 21:14:58.984933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:35.427 [2024-11-28 21:14:58.985025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69638 ] 00:07:35.427 { 00:07:35.427 "subsystems": [ 00:07:35.427 { 00:07:35.427 "subsystem": "bdev", 00:07:35.427 "config": [ 00:07:35.427 { 00:07:35.427 "params": { 00:07:35.427 "trtype": "pcie", 00:07:35.427 "traddr": "0000:00:06.0", 00:07:35.427 "name": "Nvme0" 00:07:35.427 }, 00:07:35.427 "method": "bdev_nvme_attach_controller" 00:07:35.427 }, 00:07:35.427 { 00:07:35.427 "method": "bdev_wait_for_examine" 00:07:35.427 } 00:07:35.427 ] 00:07:35.427 } 00:07:35.427 ] 00:07:35.427 } 00:07:35.427 [2024-11-28 21:14:59.113095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.427 [2024-11-28 21:14:59.146247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.686  [2024-11-28T21:14:59.429Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:35.686 00:07:35.686 21:14:59 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:35.686 21:14:59 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:35.686 21:14:59 -- dd/common.sh@31 -- # xtrace_disable 00:07:35.686 21:14:59 -- common/autotest_common.sh@10 -- # set +x 00:07:35.945 [2024-11-28 21:14:59.429551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:35.945 [2024-11-28 21:14:59.429649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69645 ] 00:07:35.945 { 00:07:35.945 "subsystems": [ 00:07:35.945 { 00:07:35.945 "subsystem": "bdev", 00:07:35.945 "config": [ 00:07:35.945 { 00:07:35.945 "params": { 00:07:35.945 "trtype": "pcie", 00:07:35.945 "traddr": "0000:00:06.0", 00:07:35.945 "name": "Nvme0" 00:07:35.945 }, 00:07:35.945 "method": "bdev_nvme_attach_controller" 00:07:35.945 }, 00:07:35.945 { 00:07:35.945 "method": "bdev_wait_for_examine" 00:07:35.945 } 00:07:35.945 ] 00:07:35.945 } 00:07:35.945 ] 00:07:35.945 } 00:07:35.945 [2024-11-28 21:14:59.565412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.945 [2024-11-28 21:14:59.596978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.204  [2024-11-28T21:14:59.947Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:36.204 00:07:36.204 21:14:59 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.204 21:14:59 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:36.204 21:14:59 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:36.204 21:14:59 -- dd/common.sh@11 -- # local nvme_ref= 00:07:36.204 21:14:59 -- dd/common.sh@12 -- # local size=49152 00:07:36.204 21:14:59 -- dd/common.sh@14 -- # local bs=1048576 00:07:36.204 21:14:59 -- dd/common.sh@15 -- # local count=1 00:07:36.204 21:14:59 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:36.204 21:14:59 -- dd/common.sh@18 -- # gen_conf 00:07:36.204 21:14:59 -- dd/common.sh@31 -- # xtrace_disable 00:07:36.204 21:14:59 -- common/autotest_common.sh@10 -- # set +x 00:07:36.204 { 00:07:36.204 
"subsystems": [ 00:07:36.204 { 00:07:36.204 "subsystem": "bdev", 00:07:36.204 "config": [ 00:07:36.204 { 00:07:36.204 "params": { 00:07:36.204 "trtype": "pcie", 00:07:36.204 "traddr": "0000:00:06.0", 00:07:36.204 "name": "Nvme0" 00:07:36.204 }, 00:07:36.204 "method": "bdev_nvme_attach_controller" 00:07:36.204 }, 00:07:36.204 { 00:07:36.204 "method": "bdev_wait_for_examine" 00:07:36.204 } 00:07:36.204 ] 00:07:36.204 } 00:07:36.204 ] 00:07:36.204 } 00:07:36.204 [2024-11-28 21:14:59.891443] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:36.204 [2024-11-28 21:14:59.891566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69664 ] 00:07:36.463 [2024-11-28 21:15:00.028024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.463 [2024-11-28 21:15:00.059823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.463  [2024-11-28T21:15:00.463Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:36.720 00:07:36.720 21:15:00 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:36.720 21:15:00 -- dd/basic_rw.sh@23 -- # count=3 00:07:36.720 21:15:00 -- dd/basic_rw.sh@24 -- # count=3 00:07:36.720 21:15:00 -- dd/basic_rw.sh@25 -- # size=49152 00:07:36.720 21:15:00 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:36.720 21:15:00 -- dd/common.sh@98 -- # xtrace_disable 00:07:36.720 21:15:00 -- common/autotest_common.sh@10 -- # set +x 00:07:37.287 21:15:00 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:37.287 21:15:00 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:37.287 21:15:00 -- dd/common.sh@31 -- # xtrace_disable 00:07:37.287 21:15:00 -- common/autotest_common.sh@10 -- # set +x 00:07:37.287 [2024-11-28 21:15:00.835575] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:37.287 [2024-11-28 21:15:00.835802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69676 ] 00:07:37.287 { 00:07:37.287 "subsystems": [ 00:07:37.287 { 00:07:37.287 "subsystem": "bdev", 00:07:37.287 "config": [ 00:07:37.287 { 00:07:37.287 "params": { 00:07:37.287 "trtype": "pcie", 00:07:37.287 "traddr": "0000:00:06.0", 00:07:37.287 "name": "Nvme0" 00:07:37.287 }, 00:07:37.287 "method": "bdev_nvme_attach_controller" 00:07:37.287 }, 00:07:37.287 { 00:07:37.287 "method": "bdev_wait_for_examine" 00:07:37.287 } 00:07:37.287 ] 00:07:37.287 } 00:07:37.287 ] 00:07:37.287 } 00:07:37.287 [2024-11-28 21:15:00.966361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.287 [2024-11-28 21:15:00.997412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.545  [2024-11-28T21:15:01.288Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:37.545 00:07:37.545 21:15:01 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:37.545 21:15:01 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:37.545 21:15:01 -- dd/common.sh@31 -- # xtrace_disable 00:07:37.545 21:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.804 [2024-11-28 21:15:01.322391] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:37.804 [2024-11-28 21:15:01.322662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69689 ] 00:07:37.804 { 00:07:37.804 "subsystems": [ 00:07:37.804 { 00:07:37.804 "subsystem": "bdev", 00:07:37.804 "config": [ 00:07:37.804 { 00:07:37.804 "params": { 00:07:37.804 "trtype": "pcie", 00:07:37.804 "traddr": "0000:00:06.0", 00:07:37.804 "name": "Nvme0" 00:07:37.804 }, 00:07:37.804 "method": "bdev_nvme_attach_controller" 00:07:37.804 }, 00:07:37.804 { 00:07:37.804 "method": "bdev_wait_for_examine" 00:07:37.804 } 00:07:37.804 ] 00:07:37.804 } 00:07:37.804 ] 00:07:37.804 } 00:07:37.804 [2024-11-28 21:15:01.461875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.804 [2024-11-28 21:15:01.501953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.062  [2024-11-28T21:15:01.805Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:38.062 00:07:38.062 21:15:01 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.062 21:15:01 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:38.062 21:15:01 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:38.062 21:15:01 -- dd/common.sh@11 -- # local nvme_ref= 00:07:38.062 21:15:01 -- dd/common.sh@12 -- # local size=49152 00:07:38.062 21:15:01 -- dd/common.sh@14 -- # local bs=1048576 00:07:38.062 21:15:01 -- dd/common.sh@15 -- # local count=1 00:07:38.062 21:15:01 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:38.062 21:15:01 -- dd/common.sh@18 -- # gen_conf 00:07:38.062 21:15:01 -- dd/common.sh@31 -- # xtrace_disable 00:07:38.062 21:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.321 [2024-11-28 
21:15:01.838461] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:38.321 [2024-11-28 21:15:01.838776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69702 ] 00:07:38.321 { 00:07:38.321 "subsystems": [ 00:07:38.321 { 00:07:38.321 "subsystem": "bdev", 00:07:38.321 "config": [ 00:07:38.321 { 00:07:38.321 "params": { 00:07:38.321 "trtype": "pcie", 00:07:38.321 "traddr": "0000:00:06.0", 00:07:38.321 "name": "Nvme0" 00:07:38.321 }, 00:07:38.321 "method": "bdev_nvme_attach_controller" 00:07:38.321 }, 00:07:38.321 { 00:07:38.321 "method": "bdev_wait_for_examine" 00:07:38.321 } 00:07:38.321 ] 00:07:38.321 } 00:07:38.321 ] 00:07:38.321 } 00:07:38.321 [2024-11-28 21:15:01.976783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.321 [2024-11-28 21:15:02.010840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.580  [2024-11-28T21:15:02.323Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:38.580 00:07:38.580 ************************************ 00:07:38.580 END TEST dd_rw 00:07:38.580 ************************************ 00:07:38.580 00:07:38.580 real 0m11.623s 00:07:38.580 user 0m8.475s 00:07:38.580 sys 0m2.046s 00:07:38.580 21:15:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.580 21:15:02 -- common/autotest_common.sh@10 -- # set +x 00:07:38.840 21:15:02 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:38.840 21:15:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.840 21:15:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.840 21:15:02 -- common/autotest_common.sh@10 -- # set +x 00:07:38.840 ************************************ 00:07:38.840 START TEST dd_rw_offset 00:07:38.840 ************************************ 00:07:38.840 21:15:02 -- common/autotest_common.sh@1114 -- # basic_offset 00:07:38.840 21:15:02 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:38.840 21:15:02 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:38.840 21:15:02 -- dd/common.sh@98 -- # xtrace_disable 00:07:38.840 21:15:02 -- common/autotest_common.sh@10 -- # set +x 00:07:38.840 21:15:02 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:38.840 21:15:02 -- dd/basic_rw.sh@56 -- # 
data=xlk9o7ec2yojy9i469o88awb9mimc9ooxkpsdch2a76t7drgyrsb4h4vkhv9b5th5up9ku4wxopj3i09udxe525m4xep8ve28t96snxgdi1ddqmwxj2pigvdc47er3uak2bkecoujn027hph8eyhfzkdzf1tm5p2cx7gbgbexhoflhcyo4fmzfok4unrde8cuxjoum4dj9lfeuoeki5yxc8fz4a7476z93aldhuo38fau7jxlqp5kwgdokffxtkrfq0i9czz9xdaw97ofu7ai1ycbd73u2xh1j3jixpt737jo8kstqw2xs106gzxbo8ytzaufu29pp6loeu4oxp42q65352s4bogugqhf8lgfo0bh9ph0hrwktcqx1igam5cgwe5o47ayv5vqjq2nqqxyzkxpz5nz8uhircoogkm4gr05pk1eiodjclykvaljflisqtu9v16i12atrsc3ucrzbuqt9zrnldixbq4wzo3edvw5satc9m34zckf20cbq5omi8vbvbume2bsfbjh7ng2p47xqz4sj8jip841tbkb922c4zbhff8v1d28bgk73bf8lyzpy848pg3sdh9guuq1id47jya5qals0g6rmtesaa1cczhobzs28q0ulrq5mk0rdyq6azfxnusy92mrev6ibjgfig657c5hi9cdrq86l4rahgximrcg5k31hyilip8apt3u4ljvjf52pn41jmq251i2bnky3q325wqur6vfgxak32e1szja4qdbj12xi8jyhxb3jh0bc70yath84xy598mk34rqkqsu61s4stuch9zohraopq3cjqmgc0cuwk7efne6n484gmayeb5h7oqupur3lgphveklhg44ir5yc663z3box1uaayn9tool4dafi7gw7euhjjzoxoua8joeoibq4253jtkiele1c1pfu7i32v8qvnkuh8zg3vi5gfv6t118lcz8zye45uziw5tarnvf9mbnsyy8d3y5k45nqw8xp0xww29cp56s3rj6mgvel2yl8g4q7i3doyrnh11wdsnzvtfho28wcqbtk3r4qs5hb91ww1wwx0sh54d0if7zgpqgb1jxib98xcd8efji5b7tib1osjlejsn1nhj360ujat9pit68u497qqpeft0itj2d1n3uxg6ku4ehwdvpqafi9xz9vcrd15ue1fq5lqm9x4gcbzr1htebcyjf54s4fdnuss55d587w2gnqt0r0ni5x30mdrg52t8rg8n74kmhft4n0zrobwsrvhiexy1n88rgrebj7clyex8t1g3ry0vj38afkuxbfz7h4avod7cwi6dmb1869ocmzlgu8pzag9yxircx3v9g02omb13ykz2rfttjjdm985fin8g93yvur94aob9e1phkiih2a2naihkgy37s9gu86p8z6mvbzk2a5ngzic8wtmi6rfoh5vmsuzb7isvixcdi94i44xqg1vwkedc0588kqfv0p2konlksmoqwj3zddtsu4cemqs9to0yxgpqojjth8j856lfb96ecj553ij3k2vnp8qy2mhzgq8f2dbk3cz0zlcwrcuq8l2fajmjwozy19o8txb9t76sqz4305jkcibhawb79oi0omxykr9n3hdfu7edp0f77nc0dp1o5f482kkcusg31kzb32m84ylg2c4rjgfcxjhk9m2o0w09rwtcs1p83qloy1qyhn8oc836e6dwgc87nm2ar0su2iqnhyo00f2i9gewuzgw2y0fjegplqgq48sib2vqq5wcvm1n1ppzphv7qq86t83l8u155wvxn92gmbo1cxmytdswigvim2gnjizttxl1fihs91wxml9kqqo3hvszz8y7w5dcyfj7mu4rczyuqm8f9pvh9xyyjr3btzkmwfhfps99wlipmo9mb3h14vdwg4x4mxveaic4pk6ars2kgktxe97p8hzj04h37jx76pdnhfk1zso6m4sxcdxfrptbh3p1y7jdn9nayu2wjnycn5l46pkwejj2ehd9ibx286618tan89lp4m8uztfwpvfgbpmkbqa1di9fomg03rl6agyiuoh5oq5lgbnm7p7q1gzrupx626lo5sqatoc4gky0vd6uwww341nq5fruwmbuyk92glmzt5jar3wfqohjn9kaeclt14wckznsd8s7p7lhm4tno0vuc4oqrk4fq26pkapdnlnmctqubqhd31sjhlmr8sz3jnpq56ngimxuo96xq08cgy0jj3pdatj2zvbcas9fhkwhfalnddz5yzjc1z0jo4awg3i8if4b30rmk7w6ngnsnrvd8xssw9oabvoe99r0oiflu1aql24z5raqiyfx6vch4auvgw7jw95g1me9qi5mu2zktmsqxtkdod7kveeooi1l2xjcrha32qapq9pcl8wrti2edabnqtf9kcts2dnasvoin678u93udkoza8c8m52i4a6gvz2m9sz6az2pev5xvyme9vgra0qhijhfekxosxzn6up4o931k7x83al5hd5oil2nx7fs0xs8xkliuzel3c3q2afg3famjthsyzl7v96qpg8dudcjwxesmf5txunqog73p2i2b65vdfwmqgg5gtrn29zvlmsz3gi8vyhqhvlib33to6nf7sa7hjzenhze2ic341uoszqxkkxh567i45feaorx8g25drfbmfxs108f5zewdd8u6rs4eki90nkdzx47o7fg1hbw09r4ztbisk1ugxzr5vq4v5t2csfe9r5kho5vh3p2fmjnvdpng217ur28fcvtkd0y33me7dge3c5e3bqjqjhqiar96nfhow1xxjy5sku3v0ncx6cabr1m04esa544fy19xzhy28wyma2f7i1xjjzy88zvjetf407081r3gum2hb6bqjxmoml7v5qqmlhma323oa8611za0sgbcqb1mr0ievectxl93vbvavop27a1c3i389p69cp7t1zyoqskug67rbt6azwdc2060oky69oxlm3zlj2lahefx4b87uk0b9srbi6g9zl6qe9x0208z54u6sghf584yz7eagpyu9ngkhsu22bj03byvoie9l1kkfxlxu9ltce786rabl8yx85617lch9rzyx5ye5o1zxu4k3g7cabspkjnbbpn5rqbarhhd6i555s9zktscefpzi337hfvycrkgu5afuhrbqc9vq7am6t1m5t3nm9tlsmfuurxx2fa7iqml7ovgwj1e9insztuinef6fn6syk048g4kucqi8dbozaajm7tlcndly99zymlzwl7q4h6tf9rj055nvsattbbnxubanf51z1kg0meysgavmelx3vizglj837q1xpsdc9x05gi6l0r0f33jrm40vrtsf23o319u2ea4oh57weq2b8hpmk73qbyxx0207tk861eurg1kh3v5rqdpq26b13p8ofdontms2sod6njlbz6li0c4m9zqyt9d9sjze2hudiuhsrnndf4mttzmi6sucjl2zslrgc0whwnvdj5w6bp630c7kd4lbv65cf0tvj22j
qfbodmhxqryd5b5nfgwmk9rjnjjbu2heoikhdm4eps83vk4vueezvrmw8c6q7z10ro9t2h4amdwehwi9atg93834q6u3wk60cbp68cdx2ojq1cckawzp7k35u48hc5kpb46xdb0i8gxqaxnq4c15bd9en8rhlj3ftdo60szzq344jhucwaw38sgco5xmtpk5o9oe5dx5oqpr5cpqoc9gx4gf6ug6lo37j2aboz18uon9dwcs3q46iws5a2dytz1grguae1ccc7vi07mqah1e23urscz6r27s6hni66b7jqsaav08afzoz31839468bznacp7exn0nppg6nig4tqszmemvjrhtvjey1s40u0uipebupkcjaroda10fzql9zem5a7mlxclha5d5soet8e5d7gek8fpeaxehfmgn9skk5wmeoryblcfwy4gvelshgm41b4at59akpg8an6amgiqijw40fpsiy394kg8qno85884el5kju5zttlr7q1sroiffdqo66733hyon9ahbld533zhgg6yctk55u 00:07:38.840 21:15:02 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:38.840 21:15:02 -- dd/basic_rw.sh@59 -- # gen_conf 00:07:38.840 21:15:02 -- dd/common.sh@31 -- # xtrace_disable 00:07:38.840 21:15:02 -- common/autotest_common.sh@10 -- # set +x 00:07:38.840 [2024-11-28 21:15:02.474579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:38.840 [2024-11-28 21:15:02.474730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69732 ] 00:07:38.840 { 00:07:38.840 "subsystems": [ 00:07:38.840 { 00:07:38.840 "subsystem": "bdev", 00:07:38.840 "config": [ 00:07:38.840 { 00:07:38.840 "params": { 00:07:38.840 "trtype": "pcie", 00:07:38.840 "traddr": "0000:00:06.0", 00:07:38.840 "name": "Nvme0" 00:07:38.840 }, 00:07:38.840 "method": "bdev_nvme_attach_controller" 00:07:38.840 }, 00:07:38.840 { 00:07:38.840 "method": "bdev_wait_for_examine" 00:07:38.840 } 00:07:38.840 ] 00:07:38.840 } 00:07:38.840 ] 00:07:38.840 } 00:07:39.100 [2024-11-28 21:15:02.622077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.100 [2024-11-28 21:15:02.660754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.100  [2024-11-28T21:15:03.102Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:39.359 00:07:39.359 21:15:02 -- dd/basic_rw.sh@65 -- # gen_conf 00:07:39.359 21:15:02 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:39.359 21:15:02 -- dd/common.sh@31 -- # xtrace_disable 00:07:39.359 21:15:02 -- common/autotest_common.sh@10 -- # set +x 00:07:39.359 [2024-11-28 21:15:03.015118] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:39.359 [2024-11-28 21:15:03.015280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69750 ] 00:07:39.359 { 00:07:39.359 "subsystems": [ 00:07:39.359 { 00:07:39.359 "subsystem": "bdev", 00:07:39.359 "config": [ 00:07:39.359 { 00:07:39.359 "params": { 00:07:39.359 "trtype": "pcie", 00:07:39.359 "traddr": "0000:00:06.0", 00:07:39.359 "name": "Nvme0" 00:07:39.359 }, 00:07:39.359 "method": "bdev_nvme_attach_controller" 00:07:39.360 }, 00:07:39.360 { 00:07:39.360 "method": "bdev_wait_for_examine" 00:07:39.360 } 00:07:39.360 ] 00:07:39.360 } 00:07:39.360 ] 00:07:39.360 } 00:07:39.677 [2024-11-28 21:15:03.157232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.677 [2024-11-28 21:15:03.187214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.677  [2024-11-28T21:15:03.712Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:39.969 00:07:39.969 21:15:03 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:39.969 21:15:03 -- dd/basic_rw.sh@72 -- # [[ xlk9o7ec2yojy9i469o88awb9mimc9ooxkpsdch2a76t7drgyrsb4h4vkhv9b5th5up9ku4wxopj3i09udxe525m4xep8ve28t96snxgdi1ddqmwxj2pigvdc47er3uak2bkecoujn027hph8eyhfzkdzf1tm5p2cx7gbgbexhoflhcyo4fmzfok4unrde8cuxjoum4dj9lfeuoeki5yxc8fz4a7476z93aldhuo38fau7jxlqp5kwgdokffxtkrfq0i9czz9xdaw97ofu7ai1ycbd73u2xh1j3jixpt737jo8kstqw2xs106gzxbo8ytzaufu29pp6loeu4oxp42q65352s4bogugqhf8lgfo0bh9ph0hrwktcqx1igam5cgwe5o47ayv5vqjq2nqqxyzkxpz5nz8uhircoogkm4gr05pk1eiodjclykvaljflisqtu9v16i12atrsc3ucrzbuqt9zrnldixbq4wzo3edvw5satc9m34zckf20cbq5omi8vbvbume2bsfbjh7ng2p47xqz4sj8jip841tbkb922c4zbhff8v1d28bgk73bf8lyzpy848pg3sdh9guuq1id47jya5qals0g6rmtesaa1cczhobzs28q0ulrq5mk0rdyq6azfxnusy92mrev6ibjgfig657c5hi9cdrq86l4rahgximrcg5k31hyilip8apt3u4ljvjf52pn41jmq251i2bnky3q325wqur6vfgxak32e1szja4qdbj12xi8jyhxb3jh0bc70yath84xy598mk34rqkqsu61s4stuch9zohraopq3cjqmgc0cuwk7efne6n484gmayeb5h7oqupur3lgphveklhg44ir5yc663z3box1uaayn9tool4dafi7gw7euhjjzoxoua8joeoibq4253jtkiele1c1pfu7i32v8qvnkuh8zg3vi5gfv6t118lcz8zye45uziw5tarnvf9mbnsyy8d3y5k45nqw8xp0xww29cp56s3rj6mgvel2yl8g4q7i3doyrnh11wdsnzvtfho28wcqbtk3r4qs5hb91ww1wwx0sh54d0if7zgpqgb1jxib98xcd8efji5b7tib1osjlejsn1nhj360ujat9pit68u497qqpeft0itj2d1n3uxg6ku4ehwdvpqafi9xz9vcrd15ue1fq5lqm9x4gcbzr1htebcyjf54s4fdnuss55d587w2gnqt0r0ni5x30mdrg52t8rg8n74kmhft4n0zrobwsrvhiexy1n88rgrebj7clyex8t1g3ry0vj38afkuxbfz7h4avod7cwi6dmb1869ocmzlgu8pzag9yxircx3v9g02omb13ykz2rfttjjdm985fin8g93yvur94aob9e1phkiih2a2naihkgy37s9gu86p8z6mvbzk2a5ngzic8wtmi6rfoh5vmsuzb7isvixcdi94i44xqg1vwkedc0588kqfv0p2konlksmoqwj3zddtsu4cemqs9to0yxgpqojjth8j856lfb96ecj553ij3k2vnp8qy2mhzgq8f2dbk3cz0zlcwrcuq8l2fajmjwozy19o8txb9t76sqz4305jkcibhawb79oi0omxykr9n3hdfu7edp0f77nc0dp1o5f482kkcusg31kzb32m84ylg2c4rjgfcxjhk9m2o0w09rwtcs1p83qloy1qyhn8oc836e6dwgc87nm2ar0su2iqnhyo00f2i9gewuzgw2y0fjegplqgq48sib2vqq5wcvm1n1ppzphv7qq86t83l8u155wvxn92gmbo1cxmytdswigvim2gnjizttxl1fihs91wxml9kqqo3hvszz8y7w5dcyfj7mu4rczyuqm8f9pvh9xyyjr3btzkmwfhfps99wlipmo9mb3h14vdwg4x4mxveaic4pk6ars2kgktxe97p8hzj04h37jx76pdnhfk1zso6m4sxcdxfrptbh3p1y7jdn9nayu2wjnycn5l46pkwejj2ehd9ibx286618tan89lp4m8uztfwpvfgbpmkbqa1di9fomg03rl6agyiuoh5oq5lgbnm7p7q1gzrupx626lo5sqatoc4gky0vd6uwww341nq5fruwmbuyk92glmzt5jar3wfqohjn9kaeclt14wckznsd8s7p7lhm4tno0vuc4oqrk4fq26pkapdnlnmctqubqhd31sjhlmr8sz3jnpq56ngimxuo96xq08cgy0jj3pdatj2zvbcas9fhkwhfalnddz5yzjc1z0jo4awg3i8if4b30rmk7w6ngnsnrvd8xssw9oabvoe99r0oiflu1aql24z5raqiyfx6vch4au
vgw7jw95g1me9qi5mu2zktmsqxtkdod7kveeooi1l2xjcrha32qapq9pcl8wrti2edabnqtf9kcts2dnasvoin678u93udkoza8c8m52i4a6gvz2m9sz6az2pev5xvyme9vgra0qhijhfekxosxzn6up4o931k7x83al5hd5oil2nx7fs0xs8xkliuzel3c3q2afg3famjthsyzl7v96qpg8dudcjwxesmf5txunqog73p2i2b65vdfwmqgg5gtrn29zvlmsz3gi8vyhqhvlib33to6nf7sa7hjzenhze2ic341uoszqxkkxh567i45feaorx8g25drfbmfxs108f5zewdd8u6rs4eki90nkdzx47o7fg1hbw09r4ztbisk1ugxzr5vq4v5t2csfe9r5kho5vh3p2fmjnvdpng217ur28fcvtkd0y33me7dge3c5e3bqjqjhqiar96nfhow1xxjy5sku3v0ncx6cabr1m04esa544fy19xzhy28wyma2f7i1xjjzy88zvjetf407081r3gum2hb6bqjxmoml7v5qqmlhma323oa8611za0sgbcqb1mr0ievectxl93vbvavop27a1c3i389p69cp7t1zyoqskug67rbt6azwdc2060oky69oxlm3zlj2lahefx4b87uk0b9srbi6g9zl6qe9x0208z54u6sghf584yz7eagpyu9ngkhsu22bj03byvoie9l1kkfxlxu9ltce786rabl8yx85617lch9rzyx5ye5o1zxu4k3g7cabspkjnbbpn5rqbarhhd6i555s9zktscefpzi337hfvycrkgu5afuhrbqc9vq7am6t1m5t3nm9tlsmfuurxx2fa7iqml7ovgwj1e9insztuinef6fn6syk048g4kucqi8dbozaajm7tlcndly99zymlzwl7q4h6tf9rj055nvsattbbnxubanf51z1kg0meysgavmelx3vizglj837q1xpsdc9x05gi6l0r0f33jrm40vrtsf23o319u2ea4oh57weq2b8hpmk73qbyxx0207tk861eurg1kh3v5rqdpq26b13p8ofdontms2sod6njlbz6li0c4m9zqyt9d9sjze2hudiuhsrnndf4mttzmi6sucjl2zslrgc0whwnvdj5w6bp630c7kd4lbv65cf0tvj22jqfbodmhxqryd5b5nfgwmk9rjnjjbu2heoikhdm4eps83vk4vueezvrmw8c6q7z10ro9t2h4amdwehwi9atg93834q6u3wk60cbp68cdx2ojq1cckawzp7k35u48hc5kpb46xdb0i8gxqaxnq4c15bd9en8rhlj3ftdo60szzq344jhucwaw38sgco5xmtpk5o9oe5dx5oqpr5cpqoc9gx4gf6ug6lo37j2aboz18uon9dwcs3q46iws5a2dytz1grguae1ccc7vi07mqah1e23urscz6r27s6hni66b7jqsaav08afzoz31839468bznacp7exn0nppg6nig4tqszmemvjrhtvjey1s40u0uipebupkcjaroda10fzql9zem5a7mlxclha5d5soet8e5d7gek8fpeaxehfmgn9skk5wmeoryblcfwy4gvelshgm41b4at59akpg8an6amgiqijw40fpsiy394kg8qno85884el5kju5zttlr7q1sroiffdqo66733hyon9ahbld533zhgg6yctk55u == \x\l\k\9\o\7\e\c\2\y\o\j\y\9\i\4\6\9\o\8\8\a\w\b\9\m\i\m\c\9\o\o\x\k\p\s\d\c\h\2\a\7\6\t\7\d\r\g\y\r\s\b\4\h\4\v\k\h\v\9\b\5\t\h\5\u\p\9\k\u\4\w\x\o\p\j\3\i\0\9\u\d\x\e\5\2\5\m\4\x\e\p\8\v\e\2\8\t\9\6\s\n\x\g\d\i\1\d\d\q\m\w\x\j\2\p\i\g\v\d\c\4\7\e\r\3\u\a\k\2\b\k\e\c\o\u\j\n\0\2\7\h\p\h\8\e\y\h\f\z\k\d\z\f\1\t\m\5\p\2\c\x\7\g\b\g\b\e\x\h\o\f\l\h\c\y\o\4\f\m\z\f\o\k\4\u\n\r\d\e\8\c\u\x\j\o\u\m\4\d\j\9\l\f\e\u\o\e\k\i\5\y\x\c\8\f\z\4\a\7\4\7\6\z\9\3\a\l\d\h\u\o\3\8\f\a\u\7\j\x\l\q\p\5\k\w\g\d\o\k\f\f\x\t\k\r\f\q\0\i\9\c\z\z\9\x\d\a\w\9\7\o\f\u\7\a\i\1\y\c\b\d\7\3\u\2\x\h\1\j\3\j\i\x\p\t\7\3\7\j\o\8\k\s\t\q\w\2\x\s\1\0\6\g\z\x\b\o\8\y\t\z\a\u\f\u\2\9\p\p\6\l\o\e\u\4\o\x\p\4\2\q\6\5\3\5\2\s\4\b\o\g\u\g\q\h\f\8\l\g\f\o\0\b\h\9\p\h\0\h\r\w\k\t\c\q\x\1\i\g\a\m\5\c\g\w\e\5\o\4\7\a\y\v\5\v\q\j\q\2\n\q\q\x\y\z\k\x\p\z\5\n\z\8\u\h\i\r\c\o\o\g\k\m\4\g\r\0\5\p\k\1\e\i\o\d\j\c\l\y\k\v\a\l\j\f\l\i\s\q\t\u\9\v\1\6\i\1\2\a\t\r\s\c\3\u\c\r\z\b\u\q\t\9\z\r\n\l\d\i\x\b\q\4\w\z\o\3\e\d\v\w\5\s\a\t\c\9\m\3\4\z\c\k\f\2\0\c\b\q\5\o\m\i\8\v\b\v\b\u\m\e\2\b\s\f\b\j\h\7\n\g\2\p\4\7\x\q\z\4\s\j\8\j\i\p\8\4\1\t\b\k\b\9\2\2\c\4\z\b\h\f\f\8\v\1\d\2\8\b\g\k\7\3\b\f\8\l\y\z\p\y\8\4\8\p\g\3\s\d\h\9\g\u\u\q\1\i\d\4\7\j\y\a\5\q\a\l\s\0\g\6\r\m\t\e\s\a\a\1\c\c\z\h\o\b\z\s\2\8\q\0\u\l\r\q\5\m\k\0\r\d\y\q\6\a\z\f\x\n\u\s\y\9\2\m\r\e\v\6\i\b\j\g\f\i\g\6\5\7\c\5\h\i\9\c\d\r\q\8\6\l\4\r\a\h\g\x\i\m\r\c\g\5\k\3\1\h\y\i\l\i\p\8\a\p\t\3\u\4\l\j\v\j\f\5\2\p\n\4\1\j\m\q\2\5\1\i\2\b\n\k\y\3\q\3\2\5\w\q\u\r\6\v\f\g\x\a\k\3\2\e\1\s\z\j\a\4\q\d\b\j\1\2\x\i\8\j\y\h\x\b\3\j\h\0\b\c\7\0\y\a\t\h\8\4\x\y\5\9\8\m\k\3\4\r\q\k\q\s\u\6\1\s\4\s\t\u\c\h\9\z\o\h\r\a\o\p\q\3\c\j\q\m\g\c\0\c\u\w\k\7\e\f\n\e\6\n\4\8\4\g\m\a\y\e\b\5\h\7\o\q\u\p\u\r\3\l\g\p\h\v\e\k\l\h\g\4\4\i\r\5\y\c\6\6\3\z\3\b\o\x\1\u\a\a\y\n\9\t\o\o\l\4\d\a\f\i\7\g\w\7\e\u\h\j\j
\z\o\x\o\u\a\8\j\o\e\o\i\b\q\4\2\5\3\j\t\k\i\e\l\e\1\c\1\p\f\u\7\i\3\2\v\8\q\v\n\k\u\h\8\z\g\3\v\i\5\g\f\v\6\t\1\1\8\l\c\z\8\z\y\e\4\5\u\z\i\w\5\t\a\r\n\v\f\9\m\b\n\s\y\y\8\d\3\y\5\k\4\5\n\q\w\8\x\p\0\x\w\w\2\9\c\p\5\6\s\3\r\j\6\m\g\v\e\l\2\y\l\8\g\4\q\7\i\3\d\o\y\r\n\h\1\1\w\d\s\n\z\v\t\f\h\o\2\8\w\c\q\b\t\k\3\r\4\q\s\5\h\b\9\1\w\w\1\w\w\x\0\s\h\5\4\d\0\i\f\7\z\g\p\q\g\b\1\j\x\i\b\9\8\x\c\d\8\e\f\j\i\5\b\7\t\i\b\1\o\s\j\l\e\j\s\n\1\n\h\j\3\6\0\u\j\a\t\9\p\i\t\6\8\u\4\9\7\q\q\p\e\f\t\0\i\t\j\2\d\1\n\3\u\x\g\6\k\u\4\e\h\w\d\v\p\q\a\f\i\9\x\z\9\v\c\r\d\1\5\u\e\1\f\q\5\l\q\m\9\x\4\g\c\b\z\r\1\h\t\e\b\c\y\j\f\5\4\s\4\f\d\n\u\s\s\5\5\d\5\8\7\w\2\g\n\q\t\0\r\0\n\i\5\x\3\0\m\d\r\g\5\2\t\8\r\g\8\n\7\4\k\m\h\f\t\4\n\0\z\r\o\b\w\s\r\v\h\i\e\x\y\1\n\8\8\r\g\r\e\b\j\7\c\l\y\e\x\8\t\1\g\3\r\y\0\v\j\3\8\a\f\k\u\x\b\f\z\7\h\4\a\v\o\d\7\c\w\i\6\d\m\b\1\8\6\9\o\c\m\z\l\g\u\8\p\z\a\g\9\y\x\i\r\c\x\3\v\9\g\0\2\o\m\b\1\3\y\k\z\2\r\f\t\t\j\j\d\m\9\8\5\f\i\n\8\g\9\3\y\v\u\r\9\4\a\o\b\9\e\1\p\h\k\i\i\h\2\a\2\n\a\i\h\k\g\y\3\7\s\9\g\u\8\6\p\8\z\6\m\v\b\z\k\2\a\5\n\g\z\i\c\8\w\t\m\i\6\r\f\o\h\5\v\m\s\u\z\b\7\i\s\v\i\x\c\d\i\9\4\i\4\4\x\q\g\1\v\w\k\e\d\c\0\5\8\8\k\q\f\v\0\p\2\k\o\n\l\k\s\m\o\q\w\j\3\z\d\d\t\s\u\4\c\e\m\q\s\9\t\o\0\y\x\g\p\q\o\j\j\t\h\8\j\8\5\6\l\f\b\9\6\e\c\j\5\5\3\i\j\3\k\2\v\n\p\8\q\y\2\m\h\z\g\q\8\f\2\d\b\k\3\c\z\0\z\l\c\w\r\c\u\q\8\l\2\f\a\j\m\j\w\o\z\y\1\9\o\8\t\x\b\9\t\7\6\s\q\z\4\3\0\5\j\k\c\i\b\h\a\w\b\7\9\o\i\0\o\m\x\y\k\r\9\n\3\h\d\f\u\7\e\d\p\0\f\7\7\n\c\0\d\p\1\o\5\f\4\8\2\k\k\c\u\s\g\3\1\k\z\b\3\2\m\8\4\y\l\g\2\c\4\r\j\g\f\c\x\j\h\k\9\m\2\o\0\w\0\9\r\w\t\c\s\1\p\8\3\q\l\o\y\1\q\y\h\n\8\o\c\8\3\6\e\6\d\w\g\c\8\7\n\m\2\a\r\0\s\u\2\i\q\n\h\y\o\0\0\f\2\i\9\g\e\w\u\z\g\w\2\y\0\f\j\e\g\p\l\q\g\q\4\8\s\i\b\2\v\q\q\5\w\c\v\m\1\n\1\p\p\z\p\h\v\7\q\q\8\6\t\8\3\l\8\u\1\5\5\w\v\x\n\9\2\g\m\b\o\1\c\x\m\y\t\d\s\w\i\g\v\i\m\2\g\n\j\i\z\t\t\x\l\1\f\i\h\s\9\1\w\x\m\l\9\k\q\q\o\3\h\v\s\z\z\8\y\7\w\5\d\c\y\f\j\7\m\u\4\r\c\z\y\u\q\m\8\f\9\p\v\h\9\x\y\y\j\r\3\b\t\z\k\m\w\f\h\f\p\s\9\9\w\l\i\p\m\o\9\m\b\3\h\1\4\v\d\w\g\4\x\4\m\x\v\e\a\i\c\4\p\k\6\a\r\s\2\k\g\k\t\x\e\9\7\p\8\h\z\j\0\4\h\3\7\j\x\7\6\p\d\n\h\f\k\1\z\s\o\6\m\4\s\x\c\d\x\f\r\p\t\b\h\3\p\1\y\7\j\d\n\9\n\a\y\u\2\w\j\n\y\c\n\5\l\4\6\p\k\w\e\j\j\2\e\h\d\9\i\b\x\2\8\6\6\1\8\t\a\n\8\9\l\p\4\m\8\u\z\t\f\w\p\v\f\g\b\p\m\k\b\q\a\1\d\i\9\f\o\m\g\0\3\r\l\6\a\g\y\i\u\o\h\5\o\q\5\l\g\b\n\m\7\p\7\q\1\g\z\r\u\p\x\6\2\6\l\o\5\s\q\a\t\o\c\4\g\k\y\0\v\d\6\u\w\w\w\3\4\1\n\q\5\f\r\u\w\m\b\u\y\k\9\2\g\l\m\z\t\5\j\a\r\3\w\f\q\o\h\j\n\9\k\a\e\c\l\t\1\4\w\c\k\z\n\s\d\8\s\7\p\7\l\h\m\4\t\n\o\0\v\u\c\4\o\q\r\k\4\f\q\2\6\p\k\a\p\d\n\l\n\m\c\t\q\u\b\q\h\d\3\1\s\j\h\l\m\r\8\s\z\3\j\n\p\q\5\6\n\g\i\m\x\u\o\9\6\x\q\0\8\c\g\y\0\j\j\3\p\d\a\t\j\2\z\v\b\c\a\s\9\f\h\k\w\h\f\a\l\n\d\d\z\5\y\z\j\c\1\z\0\j\o\4\a\w\g\3\i\8\i\f\4\b\3\0\r\m\k\7\w\6\n\g\n\s\n\r\v\d\8\x\s\s\w\9\o\a\b\v\o\e\9\9\r\0\o\i\f\l\u\1\a\q\l\2\4\z\5\r\a\q\i\y\f\x\6\v\c\h\4\a\u\v\g\w\7\j\w\9\5\g\1\m\e\9\q\i\5\m\u\2\z\k\t\m\s\q\x\t\k\d\o\d\7\k\v\e\e\o\o\i\1\l\2\x\j\c\r\h\a\3\2\q\a\p\q\9\p\c\l\8\w\r\t\i\2\e\d\a\b\n\q\t\f\9\k\c\t\s\2\d\n\a\s\v\o\i\n\6\7\8\u\9\3\u\d\k\o\z\a\8\c\8\m\5\2\i\4\a\6\g\v\z\2\m\9\s\z\6\a\z\2\p\e\v\5\x\v\y\m\e\9\v\g\r\a\0\q\h\i\j\h\f\e\k\x\o\s\x\z\n\6\u\p\4\o\9\3\1\k\7\x\8\3\a\l\5\h\d\5\o\i\l\2\n\x\7\f\s\0\x\s\8\x\k\l\i\u\z\e\l\3\c\3\q\2\a\f\g\3\f\a\m\j\t\h\s\y\z\l\7\v\9\6\q\p\g\8\d\u\d\c\j\w\x\e\s\m\f\5\t\x\u\n\q\o\g\7\3\p\2\i\2\b\6\5\v\d\f\w\m\q\g\g\5\g\t\r\n\2\9\z\v\l\m\s\z\3\g\i\8\v\y\h\q\h\v\l\i\b\3\3\t\o\6\n\f\7\s\a\7\h\j\z\e\n\h\z\e\2\i\c\3\4\1\u\o\s\z\q\x\k\k\x\h\5\6\7\i\4\5\f\e\a\o\r\x\
8\g\2\5\d\r\f\b\m\f\x\s\1\0\8\f\5\z\e\w\d\d\8\u\6\r\s\4\e\k\i\9\0\n\k\d\z\x\4\7\o\7\f\g\1\h\b\w\0\9\r\4\z\t\b\i\s\k\1\u\g\x\z\r\5\v\q\4\v\5\t\2\c\s\f\e\9\r\5\k\h\o\5\v\h\3\p\2\f\m\j\n\v\d\p\n\g\2\1\7\u\r\2\8\f\c\v\t\k\d\0\y\3\3\m\e\7\d\g\e\3\c\5\e\3\b\q\j\q\j\h\q\i\a\r\9\6\n\f\h\o\w\1\x\x\j\y\5\s\k\u\3\v\0\n\c\x\6\c\a\b\r\1\m\0\4\e\s\a\5\4\4\f\y\1\9\x\z\h\y\2\8\w\y\m\a\2\f\7\i\1\x\j\j\z\y\8\8\z\v\j\e\t\f\4\0\7\0\8\1\r\3\g\u\m\2\h\b\6\b\q\j\x\m\o\m\l\7\v\5\q\q\m\l\h\m\a\3\2\3\o\a\8\6\1\1\z\a\0\s\g\b\c\q\b\1\m\r\0\i\e\v\e\c\t\x\l\9\3\v\b\v\a\v\o\p\2\7\a\1\c\3\i\3\8\9\p\6\9\c\p\7\t\1\z\y\o\q\s\k\u\g\6\7\r\b\t\6\a\z\w\d\c\2\0\6\0\o\k\y\6\9\o\x\l\m\3\z\l\j\2\l\a\h\e\f\x\4\b\8\7\u\k\0\b\9\s\r\b\i\6\g\9\z\l\6\q\e\9\x\0\2\0\8\z\5\4\u\6\s\g\h\f\5\8\4\y\z\7\e\a\g\p\y\u\9\n\g\k\h\s\u\2\2\b\j\0\3\b\y\v\o\i\e\9\l\1\k\k\f\x\l\x\u\9\l\t\c\e\7\8\6\r\a\b\l\8\y\x\8\5\6\1\7\l\c\h\9\r\z\y\x\5\y\e\5\o\1\z\x\u\4\k\3\g\7\c\a\b\s\p\k\j\n\b\b\p\n\5\r\q\b\a\r\h\h\d\6\i\5\5\5\s\9\z\k\t\s\c\e\f\p\z\i\3\3\7\h\f\v\y\c\r\k\g\u\5\a\f\u\h\r\b\q\c\9\v\q\7\a\m\6\t\1\m\5\t\3\n\m\9\t\l\s\m\f\u\u\r\x\x\2\f\a\7\i\q\m\l\7\o\v\g\w\j\1\e\9\i\n\s\z\t\u\i\n\e\f\6\f\n\6\s\y\k\0\4\8\g\4\k\u\c\q\i\8\d\b\o\z\a\a\j\m\7\t\l\c\n\d\l\y\9\9\z\y\m\l\z\w\l\7\q\4\h\6\t\f\9\r\j\0\5\5\n\v\s\a\t\t\b\b\n\x\u\b\a\n\f\5\1\z\1\k\g\0\m\e\y\s\g\a\v\m\e\l\x\3\v\i\z\g\l\j\8\3\7\q\1\x\p\s\d\c\9\x\0\5\g\i\6\l\0\r\0\f\3\3\j\r\m\4\0\v\r\t\s\f\2\3\o\3\1\9\u\2\e\a\4\o\h\5\7\w\e\q\2\b\8\h\p\m\k\7\3\q\b\y\x\x\0\2\0\7\t\k\8\6\1\e\u\r\g\1\k\h\3\v\5\r\q\d\p\q\2\6\b\1\3\p\8\o\f\d\o\n\t\m\s\2\s\o\d\6\n\j\l\b\z\6\l\i\0\c\4\m\9\z\q\y\t\9\d\9\s\j\z\e\2\h\u\d\i\u\h\s\r\n\n\d\f\4\m\t\t\z\m\i\6\s\u\c\j\l\2\z\s\l\r\g\c\0\w\h\w\n\v\d\j\5\w\6\b\p\6\3\0\c\7\k\d\4\l\b\v\6\5\c\f\0\t\v\j\2\2\j\q\f\b\o\d\m\h\x\q\r\y\d\5\b\5\n\f\g\w\m\k\9\r\j\n\j\j\b\u\2\h\e\o\i\k\h\d\m\4\e\p\s\8\3\v\k\4\v\u\e\e\z\v\r\m\w\8\c\6\q\7\z\1\0\r\o\9\t\2\h\4\a\m\d\w\e\h\w\i\9\a\t\g\9\3\8\3\4\q\6\u\3\w\k\6\0\c\b\p\6\8\c\d\x\2\o\j\q\1\c\c\k\a\w\z\p\7\k\3\5\u\4\8\h\c\5\k\p\b\4\6\x\d\b\0\i\8\g\x\q\a\x\n\q\4\c\1\5\b\d\9\e\n\8\r\h\l\j\3\f\t\d\o\6\0\s\z\z\q\3\4\4\j\h\u\c\w\a\w\3\8\s\g\c\o\5\x\m\t\p\k\5\o\9\o\e\5\d\x\5\o\q\p\r\5\c\p\q\o\c\9\g\x\4\g\f\6\u\g\6\l\o\3\7\j\2\a\b\o\z\1\8\u\o\n\9\d\w\c\s\3\q\4\6\i\w\s\5\a\2\d\y\t\z\1\g\r\g\u\a\e\1\c\c\c\7\v\i\0\7\m\q\a\h\1\e\2\3\u\r\s\c\z\6\r\2\7\s\6\h\n\i\6\6\b\7\j\q\s\a\a\v\0\8\a\f\z\o\z\3\1\8\3\9\4\6\8\b\z\n\a\c\p\7\e\x\n\0\n\p\p\g\6\n\i\g\4\t\q\s\z\m\e\m\v\j\r\h\t\v\j\e\y\1\s\4\0\u\0\u\i\p\e\b\u\p\k\c\j\a\r\o\d\a\1\0\f\z\q\l\9\z\e\m\5\a\7\m\l\x\c\l\h\a\5\d\5\s\o\e\t\8\e\5\d\7\g\e\k\8\f\p\e\a\x\e\h\f\m\g\n\9\s\k\k\5\w\m\e\o\r\y\b\l\c\f\w\y\4\g\v\e\l\s\h\g\m\4\1\b\4\a\t\5\9\a\k\p\g\8\a\n\6\a\m\g\i\q\i\j\w\4\0\f\p\s\i\y\3\9\4\k\g\8\q\n\o\8\5\8\8\4\e\l\5\k\j\u\5\z\t\t\l\r\7\q\1\s\r\o\i\f\f\d\q\o\6\6\7\3\3\h\y\o\n\9\a\h\b\l\d\5\3\3\z\h\g\g\6\y\c\t\k\5\5\u ]] 00:07:39.969 00:07:39.969 real 0m1.083s 00:07:39.969 user 0m0.749s 00:07:39.969 sys 0m0.251s 00:07:39.969 21:15:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.969 21:15:03 -- common/autotest_common.sh@10 -- # set +x 00:07:39.969 ************************************ 00:07:39.969 END TEST dd_rw_offset 00:07:39.969 ************************************ 00:07:39.969 21:15:03 -- dd/basic_rw.sh@1 -- # cleanup 00:07:39.969 21:15:03 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:39.969 21:15:03 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:39.969 21:15:03 -- dd/common.sh@11 -- # local nvme_ref= 00:07:39.970 21:15:03 -- dd/common.sh@12 -- # local size=0xffff 00:07:39.970 21:15:03 -- dd/common.sh@14 -- 
# local bs=1048576 00:07:39.970 21:15:03 -- dd/common.sh@15 -- # local count=1 00:07:39.970 21:15:03 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:39.970 21:15:03 -- dd/common.sh@18 -- # gen_conf 00:07:39.970 21:15:03 -- dd/common.sh@31 -- # xtrace_disable 00:07:39.970 21:15:03 -- common/autotest_common.sh@10 -- # set +x 00:07:39.970 [2024-11-28 21:15:03.532419] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:39.970 [2024-11-28 21:15:03.532529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69772 ] 00:07:39.970 { 00:07:39.970 "subsystems": [ 00:07:39.970 { 00:07:39.970 "subsystem": "bdev", 00:07:39.970 "config": [ 00:07:39.970 { 00:07:39.970 "params": { 00:07:39.970 "trtype": "pcie", 00:07:39.970 "traddr": "0000:00:06.0", 00:07:39.970 "name": "Nvme0" 00:07:39.970 }, 00:07:39.970 "method": "bdev_nvme_attach_controller" 00:07:39.970 }, 00:07:39.970 { 00:07:39.970 "method": "bdev_wait_for_examine" 00:07:39.970 } 00:07:39.970 ] 00:07:39.970 } 00:07:39.970 ] 00:07:39.970 } 00:07:39.970 [2024-11-28 21:15:03.669349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.970 [2024-11-28 21:15:03.700079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.229  [2024-11-28T21:15:04.231Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:40.488 00:07:40.488 21:15:03 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.488 00:07:40.488 real 0m14.249s 00:07:40.488 user 0m10.106s 00:07:40.488 sys 0m2.723s 00:07:40.488 21:15:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.488 ************************************ 00:07:40.488 END TEST spdk_dd_basic_rw 00:07:40.488 ************************************ 00:07:40.489 21:15:03 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 21:15:04 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:40.489 21:15:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:40.489 21:15:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.489 21:15:04 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 ************************************ 00:07:40.489 START TEST spdk_dd_posix 00:07:40.489 ************************************ 00:07:40.489 21:15:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:40.489 * Looking for test storage... 
00:07:40.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:40.489 21:15:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:40.489 21:15:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:40.489 21:15:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:40.489 21:15:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:40.489 21:15:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:40.489 21:15:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:40.489 21:15:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:40.489 21:15:04 -- scripts/common.sh@335 -- # IFS=.-: 00:07:40.489 21:15:04 -- scripts/common.sh@335 -- # read -ra ver1 00:07:40.489 21:15:04 -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.489 21:15:04 -- scripts/common.sh@336 -- # read -ra ver2 00:07:40.489 21:15:04 -- scripts/common.sh@337 -- # local 'op=<' 00:07:40.489 21:15:04 -- scripts/common.sh@339 -- # ver1_l=2 00:07:40.489 21:15:04 -- scripts/common.sh@340 -- # ver2_l=1 00:07:40.489 21:15:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:40.489 21:15:04 -- scripts/common.sh@343 -- # case "$op" in 00:07:40.489 21:15:04 -- scripts/common.sh@344 -- # : 1 00:07:40.489 21:15:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:40.489 21:15:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.489 21:15:04 -- scripts/common.sh@364 -- # decimal 1 00:07:40.489 21:15:04 -- scripts/common.sh@352 -- # local d=1 00:07:40.489 21:15:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.489 21:15:04 -- scripts/common.sh@354 -- # echo 1 00:07:40.489 21:15:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:40.489 21:15:04 -- scripts/common.sh@365 -- # decimal 2 00:07:40.489 21:15:04 -- scripts/common.sh@352 -- # local d=2 00:07:40.489 21:15:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.489 21:15:04 -- scripts/common.sh@354 -- # echo 2 00:07:40.489 21:15:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:40.489 21:15:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:40.489 21:15:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:40.489 21:15:04 -- scripts/common.sh@367 -- # return 0 00:07:40.489 21:15:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.489 21:15:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:40.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.489 --rc genhtml_branch_coverage=1 00:07:40.489 --rc genhtml_function_coverage=1 00:07:40.489 --rc genhtml_legend=1 00:07:40.489 --rc geninfo_all_blocks=1 00:07:40.489 --rc geninfo_unexecuted_blocks=1 00:07:40.489 00:07:40.489 ' 00:07:40.489 21:15:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:40.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.489 --rc genhtml_branch_coverage=1 00:07:40.489 --rc genhtml_function_coverage=1 00:07:40.489 --rc genhtml_legend=1 00:07:40.489 --rc geninfo_all_blocks=1 00:07:40.489 --rc geninfo_unexecuted_blocks=1 00:07:40.489 00:07:40.489 ' 00:07:40.489 21:15:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:40.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.489 --rc genhtml_branch_coverage=1 00:07:40.489 --rc genhtml_function_coverage=1 00:07:40.489 --rc genhtml_legend=1 00:07:40.489 --rc geninfo_all_blocks=1 00:07:40.489 --rc geninfo_unexecuted_blocks=1 00:07:40.489 00:07:40.489 ' 00:07:40.489 21:15:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:40.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.489 --rc genhtml_branch_coverage=1 00:07:40.489 --rc genhtml_function_coverage=1 00:07:40.489 --rc genhtml_legend=1 00:07:40.489 --rc geninfo_all_blocks=1 00:07:40.489 --rc geninfo_unexecuted_blocks=1 00:07:40.489 00:07:40.489 ' 00:07:40.489 21:15:04 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.489 21:15:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.489 21:15:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.489 21:15:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.489 21:15:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.489 21:15:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.489 21:15:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.489 21:15:04 -- paths/export.sh@5 -- # export PATH 00:07:40.489 21:15:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.489 21:15:04 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:40.489 21:15:04 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:40.489 21:15:04 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:40.489 21:15:04 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:40.489 21:15:04 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:40.489 21:15:04 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.489 21:15:04 -- dd/posix.sh@130 -- # tests 00:07:40.489 21:15:04 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:40.489 * First test run, liburing in use 00:07:40.489 21:15:04 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:40.489 21:15:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:40.489 21:15:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.489 21:15:04 -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 ************************************ 00:07:40.749 START TEST dd_flag_append 00:07:40.749 ************************************ 00:07:40.749 21:15:04 -- common/autotest_common.sh@1114 -- # append 00:07:40.749 21:15:04 -- dd/posix.sh@16 -- # local dump0 00:07:40.749 21:15:04 -- dd/posix.sh@17 -- # local dump1 00:07:40.749 21:15:04 -- dd/posix.sh@19 -- # gen_bytes 32 00:07:40.749 21:15:04 -- dd/common.sh@98 -- # xtrace_disable 00:07:40.749 21:15:04 -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 21:15:04 -- dd/posix.sh@19 -- # dump0=a97gnst0kap8x7ax6v55xy7ct5lhhdnl 00:07:40.749 21:15:04 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:40.749 21:15:04 -- dd/common.sh@98 -- # xtrace_disable 00:07:40.749 21:15:04 -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 21:15:04 -- dd/posix.sh@20 -- # dump1=su6efmihrykdqhkd93krnirq5vr8952h 00:07:40.749 21:15:04 -- dd/posix.sh@22 -- # printf %s a97gnst0kap8x7ax6v55xy7ct5lhhdnl 00:07:40.749 21:15:04 -- dd/posix.sh@23 -- # printf %s su6efmihrykdqhkd93krnirq5vr8952h 00:07:40.749 21:15:04 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:40.749 [2024-11-28 21:15:04.290545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:40.749 [2024-11-28 21:15:04.290636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69842 ] 00:07:40.749 [2024-11-28 21:15:04.427491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.749 [2024-11-28 21:15:04.466935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.008  [2024-11-28T21:15:04.751Z] Copying: 32/32 [B] (average 31 kBps) 00:07:41.008 00:07:41.008 21:15:04 -- dd/posix.sh@27 -- # [[ su6efmihrykdqhkd93krnirq5vr8952ha97gnst0kap8x7ax6v55xy7ct5lhhdnl == \s\u\6\e\f\m\i\h\r\y\k\d\q\h\k\d\9\3\k\r\n\i\r\q\5\v\r\8\9\5\2\h\a\9\7\g\n\s\t\0\k\a\p\8\x\7\a\x\6\v\5\5\x\y\7\c\t\5\l\h\h\d\n\l ]] 00:07:41.008 00:07:41.008 ************************************ 00:07:41.008 END TEST dd_flag_append 00:07:41.008 ************************************ 00:07:41.008 real 0m0.451s 00:07:41.008 user 0m0.219s 00:07:41.008 sys 0m0.106s 00:07:41.008 21:15:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.008 21:15:04 -- common/autotest_common.sh@10 -- # set +x 00:07:41.008 21:15:04 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:41.008 21:15:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:41.008 21:15:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.008 21:15:04 -- common/autotest_common.sh@10 -- # set +x 00:07:41.008 ************************************ 00:07:41.008 START TEST dd_flag_directory 00:07:41.008 ************************************ 00:07:41.008 21:15:04 -- common/autotest_common.sh@1114 -- # directory 00:07:41.008 21:15:04 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:41.008 21:15:04 -- common/autotest_common.sh@650 -- # local es=0 00:07:41.008 21:15:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:41.008 21:15:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.008 21:15:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.008 21:15:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.008 21:15:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.008 21:15:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.008 21:15:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.008 21:15:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.008 21:15:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:41.008 21:15:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:41.267 [2024-11-28 21:15:04.790356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:41.267 [2024-11-28 21:15:04.790628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69863 ] 00:07:41.267 [2024-11-28 21:15:04.930963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.267 [2024-11-28 21:15:04.970708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.528 [2024-11-28 21:15:05.019470] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:41.528 [2024-11-28 21:15:05.019584] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:41.528 [2024-11-28 21:15:05.019611] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.528 [2024-11-28 21:15:05.081906] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:41.528 21:15:05 -- common/autotest_common.sh@653 -- # es=236 00:07:41.528 21:15:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.528 21:15:05 -- common/autotest_common.sh@662 -- # es=108 00:07:41.528 21:15:05 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:41.528 21:15:05 -- common/autotest_common.sh@670 -- # es=1 00:07:41.528 21:15:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.528 21:15:05 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:41.528 21:15:05 -- common/autotest_common.sh@650 -- # local es=0 00:07:41.528 21:15:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:41.528 21:15:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.528 21:15:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.528 21:15:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.528 21:15:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.528 21:15:05 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.528 21:15:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.528 21:15:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.528 21:15:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:41.528 21:15:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:41.528 [2024-11-28 21:15:05.202901] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:41.528 [2024-11-28 21:15:05.203022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69878 ] 00:07:41.786 [2024-11-28 21:15:05.342096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.786 [2024-11-28 21:15:05.380365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.786 [2024-11-28 21:15:05.428906] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:41.786 [2024-11-28 21:15:05.428968] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:41.786 [2024-11-28 21:15:05.428986] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.786 [2024-11-28 21:15:05.487217] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:42.045 21:15:05 -- common/autotest_common.sh@653 -- # es=236 00:07:42.045 21:15:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.045 21:15:05 -- common/autotest_common.sh@662 -- # es=108 00:07:42.045 21:15:05 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:42.045 21:15:05 -- common/autotest_common.sh@670 -- # es=1 00:07:42.045 21:15:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.045 00:07:42.045 real 0m0.809s 00:07:42.045 user 0m0.412s 00:07:42.045 sys 0m0.188s 00:07:42.045 21:15:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.045 21:15:05 -- common/autotest_common.sh@10 -- # set +x 00:07:42.045 ************************************ 00:07:42.045 END TEST dd_flag_directory 00:07:42.045 ************************************ 00:07:42.045 21:15:05 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:42.045 21:15:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:42.045 21:15:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.045 21:15:05 -- common/autotest_common.sh@10 -- # set +x 00:07:42.045 ************************************ 00:07:42.045 START TEST dd_flag_nofollow 00:07:42.045 ************************************ 00:07:42.045 21:15:05 -- common/autotest_common.sh@1114 -- # nofollow 00:07:42.045 21:15:05 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:42.045 21:15:05 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:42.045 21:15:05 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:42.045 21:15:05 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:42.045 21:15:05 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.046 21:15:05 -- common/autotest_common.sh@650 -- # local es=0 00:07:42.046 21:15:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.046 21:15:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.046 21:15:05 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.046 21:15:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.046 21:15:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.046 21:15:05 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.046 21:15:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.046 21:15:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.046 21:15:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.046 21:15:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.046 [2024-11-28 21:15:05.654131] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:42.046 [2024-11-28 21:15:05.654216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69901 ] 00:07:42.305 [2024-11-28 21:15:05.792493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.305 [2024-11-28 21:15:05.822141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.305 [2024-11-28 21:15:05.863055] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:42.305 [2024-11-28 21:15:05.863129] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:42.305 [2024-11-28 21:15:05.863159] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:42.305 [2024-11-28 21:15:05.923103] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:42.305 21:15:05 -- common/autotest_common.sh@653 -- # es=216 00:07:42.305 21:15:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.305 21:15:05 -- common/autotest_common.sh@662 -- # es=88 00:07:42.305 21:15:05 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:42.305 21:15:05 -- common/autotest_common.sh@670 -- # es=1 00:07:42.305 21:15:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.305 21:15:05 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:42.305 21:15:05 -- common/autotest_common.sh@650 -- # local es=0 00:07:42.305 21:15:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:42.305 21:15:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.305 21:15:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.305 21:15:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.305 21:15:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.305 21:15:05 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.305 21:15:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.305 21:15:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.305 21:15:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.305 21:15:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:42.305 [2024-11-28 21:15:06.038822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:42.305 [2024-11-28 21:15:06.038919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69916 ] 00:07:42.565 [2024-11-28 21:15:06.175145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.565 [2024-11-28 21:15:06.204995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.565 [2024-11-28 21:15:06.244970] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:42.565 [2024-11-28 21:15:06.245067] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:42.565 [2024-11-28 21:15:06.245098] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:42.565 [2024-11-28 21:15:06.303404] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:42.824 21:15:06 -- common/autotest_common.sh@653 -- # es=216 00:07:42.824 21:15:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.824 21:15:06 -- common/autotest_common.sh@662 -- # es=88 00:07:42.824 21:15:06 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:42.824 21:15:06 -- common/autotest_common.sh@670 -- # es=1 00:07:42.824 21:15:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.824 21:15:06 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:42.824 21:15:06 -- dd/common.sh@98 -- # xtrace_disable 00:07:42.824 21:15:06 -- common/autotest_common.sh@10 -- # set +x 00:07:42.824 21:15:06 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.824 [2024-11-28 21:15:06.422815] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
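The dd_flag_nofollow sequence first builds symlinks to the dump files with ln -fs, then asks spdk_dd to open a link with --iflag=nofollow (and, mirrored, --oflag=nofollow), which must fail with ELOOP, the "Too many levels of symbolic links" errors in the trace. A minimal sketch of the input-side case, using the paths from the trace:

  # Create the symlink, then confirm nofollow refuses to traverse it.
  ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
         /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1; then
      echo "nofollow unexpectedly followed a symlink" >&2
      exit 1
  fi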
00:07:42.824 [2024-11-28 21:15:06.422912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69918 ] 00:07:42.824 [2024-11-28 21:15:06.559199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.083 [2024-11-28 21:15:06.589770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.083  [2024-11-28T21:15:06.826Z] Copying: 512/512 [B] (average 500 kBps) 00:07:43.083 00:07:43.083 ************************************ 00:07:43.083 END TEST dd_flag_nofollow 00:07:43.083 ************************************ 00:07:43.083 21:15:06 -- dd/posix.sh@49 -- # [[ jf1rzadwifpf6tpkeo6umidblp9r8x3dgh9wn1t19rdikwj467itzp9nasdjbdv48psbmyba30b9n8gjt1sgqg0qnhsg7m624wixpbe3x0c133ellua3griweev901ung6g3lo5z8ti2q2bmt1afsaaxbrj8osfdfotx45jejlzoorgiqfek0zfvwrahf7q9ee0oqka8hgww38o53kb2nfbr57do7omksi7hpcvvscubd8xhh52phxatoefcfxgoac35g7155vzbiw2te444k8modw82v6rvgzcs6o4dmhch3rlm082y76gbkdgkfqpzkmampc1lxojkibyq439mhazc2uxy7292fvw9yzsy2hnxv8kh6qt1npsu9idrjofed99dx04g3se6di8x4s0ucjaa4cy4bt9bctd373uiiwb60nfkwlqsr47npwxdurlhsgfnb4q1mct1y5omvvkkfgzd9cdwh7ati33zl9rwoo4i1zl0q1u3exfts3npgb4e == \j\f\1\r\z\a\d\w\i\f\p\f\6\t\p\k\e\o\6\u\m\i\d\b\l\p\9\r\8\x\3\d\g\h\9\w\n\1\t\1\9\r\d\i\k\w\j\4\6\7\i\t\z\p\9\n\a\s\d\j\b\d\v\4\8\p\s\b\m\y\b\a\3\0\b\9\n\8\g\j\t\1\s\g\q\g\0\q\n\h\s\g\7\m\6\2\4\w\i\x\p\b\e\3\x\0\c\1\3\3\e\l\l\u\a\3\g\r\i\w\e\e\v\9\0\1\u\n\g\6\g\3\l\o\5\z\8\t\i\2\q\2\b\m\t\1\a\f\s\a\a\x\b\r\j\8\o\s\f\d\f\o\t\x\4\5\j\e\j\l\z\o\o\r\g\i\q\f\e\k\0\z\f\v\w\r\a\h\f\7\q\9\e\e\0\o\q\k\a\8\h\g\w\w\3\8\o\5\3\k\b\2\n\f\b\r\5\7\d\o\7\o\m\k\s\i\7\h\p\c\v\v\s\c\u\b\d\8\x\h\h\5\2\p\h\x\a\t\o\e\f\c\f\x\g\o\a\c\3\5\g\7\1\5\5\v\z\b\i\w\2\t\e\4\4\4\k\8\m\o\d\w\8\2\v\6\r\v\g\z\c\s\6\o\4\d\m\h\c\h\3\r\l\m\0\8\2\y\7\6\g\b\k\d\g\k\f\q\p\z\k\m\a\m\p\c\1\l\x\o\j\k\i\b\y\q\4\3\9\m\h\a\z\c\2\u\x\y\7\2\9\2\f\v\w\9\y\z\s\y\2\h\n\x\v\8\k\h\6\q\t\1\n\p\s\u\9\i\d\r\j\o\f\e\d\9\9\d\x\0\4\g\3\s\e\6\d\i\8\x\4\s\0\u\c\j\a\a\4\c\y\4\b\t\9\b\c\t\d\3\7\3\u\i\i\w\b\6\0\n\f\k\w\l\q\s\r\4\7\n\p\w\x\d\u\r\l\h\s\g\f\n\b\4\q\1\m\c\t\1\y\5\o\m\v\v\k\k\f\g\z\d\9\c\d\w\h\7\a\t\i\3\3\z\l\9\r\w\o\o\4\i\1\z\l\0\q\1\u\3\e\x\f\t\s\3\n\p\g\b\4\e ]] 00:07:43.083 00:07:43.083 real 0m1.174s 00:07:43.083 user 0m0.555s 00:07:43.083 sys 0m0.293s 00:07:43.083 21:15:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.083 21:15:06 -- common/autotest_common.sh@10 -- # set +x 00:07:43.083 21:15:06 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:43.083 21:15:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:43.083 21:15:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.083 21:15:06 -- common/autotest_common.sh@10 -- # set +x 00:07:43.083 ************************************ 00:07:43.083 START TEST dd_flag_noatime 00:07:43.083 ************************************ 00:07:43.083 21:15:06 -- common/autotest_common.sh@1114 -- # noatime 00:07:43.083 21:15:06 -- dd/posix.sh@53 -- # local atime_if 00:07:43.083 21:15:06 -- dd/posix.sh@54 -- # local atime_of 00:07:43.083 21:15:06 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:43.084 21:15:06 -- dd/common.sh@98 -- # xtrace_disable 00:07:43.084 21:15:06 -- common/autotest_common.sh@10 -- # set +x 00:07:43.342 21:15:06 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:43.342 21:15:06 -- dd/posix.sh@60 -- # atime_if=1732828506 
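With nofollow done, dd_flag_noatime begins by recording the access time of each dump file with stat --printf=%X; the epoch value 1732828506 in the trace is that recorded atime, and the sleep 1 that follows guarantees that any later access would be visibly newer. A short sketch of the bookkeeping, with paths from the trace:

  # Capture pre-copy access times, then wait so a later access stands out.
  atime_if=$(stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0)
  atime_of=$(stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1)
  sleep 1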
00:07:43.342 21:15:06 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.342 21:15:06 -- dd/posix.sh@61 -- # atime_of=1732828506 00:07:43.342 21:15:06 -- dd/posix.sh@66 -- # sleep 1 00:07:44.277 21:15:07 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.277 [2024-11-28 21:15:07.894587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:44.277 [2024-11-28 21:15:07.894685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69953 ] 00:07:44.537 [2024-11-28 21:15:08.030640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.537 [2024-11-28 21:15:08.070586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.537  [2024-11-28T21:15:08.280Z] Copying: 512/512 [B] (average 500 kBps) 00:07:44.537 00:07:44.537 21:15:08 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.537 21:15:08 -- dd/posix.sh@69 -- # (( atime_if == 1732828506 )) 00:07:44.537 21:15:08 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.537 21:15:08 -- dd/posix.sh@70 -- # (( atime_of == 1732828506 )) 00:07:44.537 21:15:08 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.796 [2024-11-28 21:15:08.311326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
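After the --iflag=noatime copy the test re-reads the atimes and asserts they did not move (the "(( atime_if == 1732828506 ))" checks), then copies again without noatime and expects the source atime to advance, which the final "(( atime_if < ... ))" comparison confirms. A condensed sketch of the idea behind those checks, not the exact helper code:

  # Copy with noatime: the source atime must not move.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  (( $(stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0) == atime_if ))

  # Copy again without noatime: after the earlier sleep, the atime should advance.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  (( $(stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0) > atime_if ))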
00:07:44.796 [2024-11-28 21:15:08.311421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69970 ] 00:07:44.796 [2024-11-28 21:15:08.447462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.796 [2024-11-28 21:15:08.477094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.796  [2024-11-28T21:15:08.798Z] Copying: 512/512 [B] (average 500 kBps) 00:07:45.055 00:07:45.056 21:15:08 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:45.056 ************************************ 00:07:45.056 END TEST dd_flag_noatime 00:07:45.056 ************************************ 00:07:45.056 21:15:08 -- dd/posix.sh@73 -- # (( atime_if < 1732828508 )) 00:07:45.056 00:07:45.056 real 0m1.840s 00:07:45.056 user 0m0.398s 00:07:45.056 sys 0m0.205s 00:07:45.056 21:15:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.056 21:15:08 -- common/autotest_common.sh@10 -- # set +x 00:07:45.056 21:15:08 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:45.056 21:15:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:45.056 21:15:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.056 21:15:08 -- common/autotest_common.sh@10 -- # set +x 00:07:45.056 ************************************ 00:07:45.056 START TEST dd_flags_misc 00:07:45.056 ************************************ 00:07:45.056 21:15:08 -- common/autotest_common.sh@1114 -- # io 00:07:45.056 21:15:08 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:45.056 21:15:08 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:45.056 21:15:08 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:45.056 21:15:08 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:45.056 21:15:08 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:45.056 21:15:08 -- dd/common.sh@98 -- # xtrace_disable 00:07:45.056 21:15:08 -- common/autotest_common.sh@10 -- # set +x 00:07:45.056 21:15:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:45.056 21:15:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:45.056 [2024-11-28 21:15:08.761944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
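dd_flags_misc, which starts here, drives the same 512-byte copy through every pairing of the read flags (direct, nonblock) with the write flags (direct, nonblock, sync, dsync); the long [[ qg1tlax... == ... ]] and [[ 37bykt... == ... ]] lines that follow are the payload comparisons after each pass. The loop skeleton, reconstructed from the flags_ro/flags_rw arrays and for-loops visible in the trace:

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
          # Each iteration regenerates a 512-byte payload, copies it with the
          # given flag pair, and then compares source and destination contents.
          /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
              --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag="$flag_ro" \
              --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag="$flag_rw"
      done
  done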
00:07:45.056 [2024-11-28 21:15:08.762053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69991 ] 00:07:45.314 [2024-11-28 21:15:08.884593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.314 [2024-11-28 21:15:08.914662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.314  [2024-11-28T21:15:09.316Z] Copying: 512/512 [B] (average 500 kBps) 00:07:45.573 00:07:45.574 21:15:09 -- dd/posix.sh@93 -- # [[ qg1tlax653yoqer1ownzm4zjapklrw6h5vag4ko2kogxcu6ypgejxdqs6u4v6c546r7vixjlgmhblnnnybk1zpug6ltjf50esa2rb2wwibvrvh2ooynesq9d7y9wo10gd88peo8ff305sbu3ty3db3lsvq1rujk4i4t8rrwuml91frh62hrvx6srpmc6u6brbpqdazeo1td11l4cyi0xpvyj4gyx3g8lwcrjwodocgsjpbxgx3mwk67tio5zrip4p3qw2e8mk6565olruxread9ezbgbx4ocdmeaosu8hc4e2vp429sq9s2622luigu68w6t4zjbbmyb9qgwxqtg9ujyy8lwkwueva39hr61psg5q59jgnfg1olflt6wdlbi7or1ksmuveyna5pn7k6jwqe0w7n0fur9tfrt2tjnkqsf3yid96mfz5ll4s142tx12mf4zla0pghdp6g79g7g5ky8l1e7ok45dler6y6mo49hjmdypalssqqlwo0gv7hh == \q\g\1\t\l\a\x\6\5\3\y\o\q\e\r\1\o\w\n\z\m\4\z\j\a\p\k\l\r\w\6\h\5\v\a\g\4\k\o\2\k\o\g\x\c\u\6\y\p\g\e\j\x\d\q\s\6\u\4\v\6\c\5\4\6\r\7\v\i\x\j\l\g\m\h\b\l\n\n\n\y\b\k\1\z\p\u\g\6\l\t\j\f\5\0\e\s\a\2\r\b\2\w\w\i\b\v\r\v\h\2\o\o\y\n\e\s\q\9\d\7\y\9\w\o\1\0\g\d\8\8\p\e\o\8\f\f\3\0\5\s\b\u\3\t\y\3\d\b\3\l\s\v\q\1\r\u\j\k\4\i\4\t\8\r\r\w\u\m\l\9\1\f\r\h\6\2\h\r\v\x\6\s\r\p\m\c\6\u\6\b\r\b\p\q\d\a\z\e\o\1\t\d\1\1\l\4\c\y\i\0\x\p\v\y\j\4\g\y\x\3\g\8\l\w\c\r\j\w\o\d\o\c\g\s\j\p\b\x\g\x\3\m\w\k\6\7\t\i\o\5\z\r\i\p\4\p\3\q\w\2\e\8\m\k\6\5\6\5\o\l\r\u\x\r\e\a\d\9\e\z\b\g\b\x\4\o\c\d\m\e\a\o\s\u\8\h\c\4\e\2\v\p\4\2\9\s\q\9\s\2\6\2\2\l\u\i\g\u\6\8\w\6\t\4\z\j\b\b\m\y\b\9\q\g\w\x\q\t\g\9\u\j\y\y\8\l\w\k\w\u\e\v\a\3\9\h\r\6\1\p\s\g\5\q\5\9\j\g\n\f\g\1\o\l\f\l\t\6\w\d\l\b\i\7\o\r\1\k\s\m\u\v\e\y\n\a\5\p\n\7\k\6\j\w\q\e\0\w\7\n\0\f\u\r\9\t\f\r\t\2\t\j\n\k\q\s\f\3\y\i\d\9\6\m\f\z\5\l\l\4\s\1\4\2\t\x\1\2\m\f\4\z\l\a\0\p\g\h\d\p\6\g\7\9\g\7\g\5\k\y\8\l\1\e\7\o\k\4\5\d\l\e\r\6\y\6\m\o\4\9\h\j\m\d\y\p\a\l\s\s\q\q\l\w\o\0\g\v\7\h\h ]] 00:07:45.574 21:15:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:45.574 21:15:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:45.574 [2024-11-28 21:15:09.155992] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:45.574 [2024-11-28 21:15:09.156111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70004 ] 00:07:45.574 [2024-11-28 21:15:09.285025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.574 [2024-11-28 21:15:09.315682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.833  [2024-11-28T21:15:09.576Z] Copying: 512/512 [B] (average 500 kBps) 00:07:45.833 00:07:45.833 21:15:09 -- dd/posix.sh@93 -- # [[ qg1tlax653yoqer1ownzm4zjapklrw6h5vag4ko2kogxcu6ypgejxdqs6u4v6c546r7vixjlgmhblnnnybk1zpug6ltjf50esa2rb2wwibvrvh2ooynesq9d7y9wo10gd88peo8ff305sbu3ty3db3lsvq1rujk4i4t8rrwuml91frh62hrvx6srpmc6u6brbpqdazeo1td11l4cyi0xpvyj4gyx3g8lwcrjwodocgsjpbxgx3mwk67tio5zrip4p3qw2e8mk6565olruxread9ezbgbx4ocdmeaosu8hc4e2vp429sq9s2622luigu68w6t4zjbbmyb9qgwxqtg9ujyy8lwkwueva39hr61psg5q59jgnfg1olflt6wdlbi7or1ksmuveyna5pn7k6jwqe0w7n0fur9tfrt2tjnkqsf3yid96mfz5ll4s142tx12mf4zla0pghdp6g79g7g5ky8l1e7ok45dler6y6mo49hjmdypalssqqlwo0gv7hh == \q\g\1\t\l\a\x\6\5\3\y\o\q\e\r\1\o\w\n\z\m\4\z\j\a\p\k\l\r\w\6\h\5\v\a\g\4\k\o\2\k\o\g\x\c\u\6\y\p\g\e\j\x\d\q\s\6\u\4\v\6\c\5\4\6\r\7\v\i\x\j\l\g\m\h\b\l\n\n\n\y\b\k\1\z\p\u\g\6\l\t\j\f\5\0\e\s\a\2\r\b\2\w\w\i\b\v\r\v\h\2\o\o\y\n\e\s\q\9\d\7\y\9\w\o\1\0\g\d\8\8\p\e\o\8\f\f\3\0\5\s\b\u\3\t\y\3\d\b\3\l\s\v\q\1\r\u\j\k\4\i\4\t\8\r\r\w\u\m\l\9\1\f\r\h\6\2\h\r\v\x\6\s\r\p\m\c\6\u\6\b\r\b\p\q\d\a\z\e\o\1\t\d\1\1\l\4\c\y\i\0\x\p\v\y\j\4\g\y\x\3\g\8\l\w\c\r\j\w\o\d\o\c\g\s\j\p\b\x\g\x\3\m\w\k\6\7\t\i\o\5\z\r\i\p\4\p\3\q\w\2\e\8\m\k\6\5\6\5\o\l\r\u\x\r\e\a\d\9\e\z\b\g\b\x\4\o\c\d\m\e\a\o\s\u\8\h\c\4\e\2\v\p\4\2\9\s\q\9\s\2\6\2\2\l\u\i\g\u\6\8\w\6\t\4\z\j\b\b\m\y\b\9\q\g\w\x\q\t\g\9\u\j\y\y\8\l\w\k\w\u\e\v\a\3\9\h\r\6\1\p\s\g\5\q\5\9\j\g\n\f\g\1\o\l\f\l\t\6\w\d\l\b\i\7\o\r\1\k\s\m\u\v\e\y\n\a\5\p\n\7\k\6\j\w\q\e\0\w\7\n\0\f\u\r\9\t\f\r\t\2\t\j\n\k\q\s\f\3\y\i\d\9\6\m\f\z\5\l\l\4\s\1\4\2\t\x\1\2\m\f\4\z\l\a\0\p\g\h\d\p\6\g\7\9\g\7\g\5\k\y\8\l\1\e\7\o\k\4\5\d\l\e\r\6\y\6\m\o\4\9\h\j\m\d\y\p\a\l\s\s\q\q\l\w\o\0\g\v\7\h\h ]] 00:07:45.833 21:15:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:45.833 21:15:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:45.833 [2024-11-28 21:15:09.516584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:45.833 [2024-11-28 21:15:09.516667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70006 ] 00:07:46.091 [2024-11-28 21:15:09.639294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.091 [2024-11-28 21:15:09.669085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.091  [2024-11-28T21:15:10.094Z] Copying: 512/512 [B] (average 500 kBps) 00:07:46.351 00:07:46.351 21:15:09 -- dd/posix.sh@93 -- # [[ qg1tlax653yoqer1ownzm4zjapklrw6h5vag4ko2kogxcu6ypgejxdqs6u4v6c546r7vixjlgmhblnnnybk1zpug6ltjf50esa2rb2wwibvrvh2ooynesq9d7y9wo10gd88peo8ff305sbu3ty3db3lsvq1rujk4i4t8rrwuml91frh62hrvx6srpmc6u6brbpqdazeo1td11l4cyi0xpvyj4gyx3g8lwcrjwodocgsjpbxgx3mwk67tio5zrip4p3qw2e8mk6565olruxread9ezbgbx4ocdmeaosu8hc4e2vp429sq9s2622luigu68w6t4zjbbmyb9qgwxqtg9ujyy8lwkwueva39hr61psg5q59jgnfg1olflt6wdlbi7or1ksmuveyna5pn7k6jwqe0w7n0fur9tfrt2tjnkqsf3yid96mfz5ll4s142tx12mf4zla0pghdp6g79g7g5ky8l1e7ok45dler6y6mo49hjmdypalssqqlwo0gv7hh == \q\g\1\t\l\a\x\6\5\3\y\o\q\e\r\1\o\w\n\z\m\4\z\j\a\p\k\l\r\w\6\h\5\v\a\g\4\k\o\2\k\o\g\x\c\u\6\y\p\g\e\j\x\d\q\s\6\u\4\v\6\c\5\4\6\r\7\v\i\x\j\l\g\m\h\b\l\n\n\n\y\b\k\1\z\p\u\g\6\l\t\j\f\5\0\e\s\a\2\r\b\2\w\w\i\b\v\r\v\h\2\o\o\y\n\e\s\q\9\d\7\y\9\w\o\1\0\g\d\8\8\p\e\o\8\f\f\3\0\5\s\b\u\3\t\y\3\d\b\3\l\s\v\q\1\r\u\j\k\4\i\4\t\8\r\r\w\u\m\l\9\1\f\r\h\6\2\h\r\v\x\6\s\r\p\m\c\6\u\6\b\r\b\p\q\d\a\z\e\o\1\t\d\1\1\l\4\c\y\i\0\x\p\v\y\j\4\g\y\x\3\g\8\l\w\c\r\j\w\o\d\o\c\g\s\j\p\b\x\g\x\3\m\w\k\6\7\t\i\o\5\z\r\i\p\4\p\3\q\w\2\e\8\m\k\6\5\6\5\o\l\r\u\x\r\e\a\d\9\e\z\b\g\b\x\4\o\c\d\m\e\a\o\s\u\8\h\c\4\e\2\v\p\4\2\9\s\q\9\s\2\6\2\2\l\u\i\g\u\6\8\w\6\t\4\z\j\b\b\m\y\b\9\q\g\w\x\q\t\g\9\u\j\y\y\8\l\w\k\w\u\e\v\a\3\9\h\r\6\1\p\s\g\5\q\5\9\j\g\n\f\g\1\o\l\f\l\t\6\w\d\l\b\i\7\o\r\1\k\s\m\u\v\e\y\n\a\5\p\n\7\k\6\j\w\q\e\0\w\7\n\0\f\u\r\9\t\f\r\t\2\t\j\n\k\q\s\f\3\y\i\d\9\6\m\f\z\5\l\l\4\s\1\4\2\t\x\1\2\m\f\4\z\l\a\0\p\g\h\d\p\6\g\7\9\g\7\g\5\k\y\8\l\1\e\7\o\k\4\5\d\l\e\r\6\y\6\m\o\4\9\h\j\m\d\y\p\a\l\s\s\q\q\l\w\o\0\g\v\7\h\h ]] 00:07:46.351 21:15:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:46.351 21:15:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:46.351 [2024-11-28 21:15:09.885586] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:46.351 [2024-11-28 21:15:09.885806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70008 ] 00:07:46.351 [2024-11-28 21:15:10.011966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.351 [2024-11-28 21:15:10.045620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.351  [2024-11-28T21:15:10.353Z] Copying: 512/512 [B] (average 500 kBps) 00:07:46.610 00:07:46.610 21:15:10 -- dd/posix.sh@93 -- # [[ qg1tlax653yoqer1ownzm4zjapklrw6h5vag4ko2kogxcu6ypgejxdqs6u4v6c546r7vixjlgmhblnnnybk1zpug6ltjf50esa2rb2wwibvrvh2ooynesq9d7y9wo10gd88peo8ff305sbu3ty3db3lsvq1rujk4i4t8rrwuml91frh62hrvx6srpmc6u6brbpqdazeo1td11l4cyi0xpvyj4gyx3g8lwcrjwodocgsjpbxgx3mwk67tio5zrip4p3qw2e8mk6565olruxread9ezbgbx4ocdmeaosu8hc4e2vp429sq9s2622luigu68w6t4zjbbmyb9qgwxqtg9ujyy8lwkwueva39hr61psg5q59jgnfg1olflt6wdlbi7or1ksmuveyna5pn7k6jwqe0w7n0fur9tfrt2tjnkqsf3yid96mfz5ll4s142tx12mf4zla0pghdp6g79g7g5ky8l1e7ok45dler6y6mo49hjmdypalssqqlwo0gv7hh == \q\g\1\t\l\a\x\6\5\3\y\o\q\e\r\1\o\w\n\z\m\4\z\j\a\p\k\l\r\w\6\h\5\v\a\g\4\k\o\2\k\o\g\x\c\u\6\y\p\g\e\j\x\d\q\s\6\u\4\v\6\c\5\4\6\r\7\v\i\x\j\l\g\m\h\b\l\n\n\n\y\b\k\1\z\p\u\g\6\l\t\j\f\5\0\e\s\a\2\r\b\2\w\w\i\b\v\r\v\h\2\o\o\y\n\e\s\q\9\d\7\y\9\w\o\1\0\g\d\8\8\p\e\o\8\f\f\3\0\5\s\b\u\3\t\y\3\d\b\3\l\s\v\q\1\r\u\j\k\4\i\4\t\8\r\r\w\u\m\l\9\1\f\r\h\6\2\h\r\v\x\6\s\r\p\m\c\6\u\6\b\r\b\p\q\d\a\z\e\o\1\t\d\1\1\l\4\c\y\i\0\x\p\v\y\j\4\g\y\x\3\g\8\l\w\c\r\j\w\o\d\o\c\g\s\j\p\b\x\g\x\3\m\w\k\6\7\t\i\o\5\z\r\i\p\4\p\3\q\w\2\e\8\m\k\6\5\6\5\o\l\r\u\x\r\e\a\d\9\e\z\b\g\b\x\4\o\c\d\m\e\a\o\s\u\8\h\c\4\e\2\v\p\4\2\9\s\q\9\s\2\6\2\2\l\u\i\g\u\6\8\w\6\t\4\z\j\b\b\m\y\b\9\q\g\w\x\q\t\g\9\u\j\y\y\8\l\w\k\w\u\e\v\a\3\9\h\r\6\1\p\s\g\5\q\5\9\j\g\n\f\g\1\o\l\f\l\t\6\w\d\l\b\i\7\o\r\1\k\s\m\u\v\e\y\n\a\5\p\n\7\k\6\j\w\q\e\0\w\7\n\0\f\u\r\9\t\f\r\t\2\t\j\n\k\q\s\f\3\y\i\d\9\6\m\f\z\5\l\l\4\s\1\4\2\t\x\1\2\m\f\4\z\l\a\0\p\g\h\d\p\6\g\7\9\g\7\g\5\k\y\8\l\1\e\7\o\k\4\5\d\l\e\r\6\y\6\m\o\4\9\h\j\m\d\y\p\a\l\s\s\q\q\l\w\o\0\g\v\7\h\h ]] 00:07:46.610 21:15:10 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:46.610 21:15:10 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:46.610 21:15:10 -- dd/common.sh@98 -- # xtrace_disable 00:07:46.610 21:15:10 -- common/autotest_common.sh@10 -- # set +x 00:07:46.610 21:15:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:46.610 21:15:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:46.610 [2024-11-28 21:15:10.276456] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:46.610 [2024-11-28 21:15:10.276539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70021 ] 00:07:46.869 [2024-11-28 21:15:10.398868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.869 [2024-11-28 21:15:10.428509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.869  [2024-11-28T21:15:10.612Z] Copying: 512/512 [B] (average 500 kBps) 00:07:46.869 00:07:46.869 21:15:10 -- dd/posix.sh@93 -- # [[ 37byktnikt31c5bje49gql5uodmil74klk3h4wh9jde03b6d3yltvh3uqjv24z9h12iij2tidte4sydgtj0yyzelb1t9ipyc51p1sg8x996wtthafv37d7x3y1ytmm1l00ksgs938i57r8baheexickldl372okoa2g053dzc4wldg2ob9c65hm4cshoo2if7vowoa8dn0oxl4mmawzk4s1wznrwncj29nngrvhu9b5bgqzgcsze4v34q8dujqcz73i6smucxeilhdgausxx9qdmfm22d5we327fti52meo0qi75lcphftvcekemqw0h1u6jja6yrpyuywbn9skaef7iro3blpfuyt2tfw43gcwd0y4o6dahw0qwkbp1ct1e0mb9vc05bfdlsmfohmqb2tpmwgp1rjs0ae8k3dwvlfd61p7o9bgnniq6jdybioxw4judrtaukxkc6factq8gp20ryl7jhcsqj7j61n9it80aelxbp6jckl0k47lzarl9 == \3\7\b\y\k\t\n\i\k\t\3\1\c\5\b\j\e\4\9\g\q\l\5\u\o\d\m\i\l\7\4\k\l\k\3\h\4\w\h\9\j\d\e\0\3\b\6\d\3\y\l\t\v\h\3\u\q\j\v\2\4\z\9\h\1\2\i\i\j\2\t\i\d\t\e\4\s\y\d\g\t\j\0\y\y\z\e\l\b\1\t\9\i\p\y\c\5\1\p\1\s\g\8\x\9\9\6\w\t\t\h\a\f\v\3\7\d\7\x\3\y\1\y\t\m\m\1\l\0\0\k\s\g\s\9\3\8\i\5\7\r\8\b\a\h\e\e\x\i\c\k\l\d\l\3\7\2\o\k\o\a\2\g\0\5\3\d\z\c\4\w\l\d\g\2\o\b\9\c\6\5\h\m\4\c\s\h\o\o\2\i\f\7\v\o\w\o\a\8\d\n\0\o\x\l\4\m\m\a\w\z\k\4\s\1\w\z\n\r\w\n\c\j\2\9\n\n\g\r\v\h\u\9\b\5\b\g\q\z\g\c\s\z\e\4\v\3\4\q\8\d\u\j\q\c\z\7\3\i\6\s\m\u\c\x\e\i\l\h\d\g\a\u\s\x\x\9\q\d\m\f\m\2\2\d\5\w\e\3\2\7\f\t\i\5\2\m\e\o\0\q\i\7\5\l\c\p\h\f\t\v\c\e\k\e\m\q\w\0\h\1\u\6\j\j\a\6\y\r\p\y\u\y\w\b\n\9\s\k\a\e\f\7\i\r\o\3\b\l\p\f\u\y\t\2\t\f\w\4\3\g\c\w\d\0\y\4\o\6\d\a\h\w\0\q\w\k\b\p\1\c\t\1\e\0\m\b\9\v\c\0\5\b\f\d\l\s\m\f\o\h\m\q\b\2\t\p\m\w\g\p\1\r\j\s\0\a\e\8\k\3\d\w\v\l\f\d\6\1\p\7\o\9\b\g\n\n\i\q\6\j\d\y\b\i\o\x\w\4\j\u\d\r\t\a\u\k\x\k\c\6\f\a\c\t\q\8\g\p\2\0\r\y\l\7\j\h\c\s\q\j\7\j\6\1\n\9\i\t\8\0\a\e\l\x\b\p\6\j\c\k\l\0\k\4\7\l\z\a\r\l\9 ]] 00:07:46.869 21:15:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:46.869 21:15:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:47.128 [2024-11-28 21:15:10.630743] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:47.128 [2024-11-28 21:15:10.630960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70023 ] 00:07:47.128 [2024-11-28 21:15:10.753264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.128 [2024-11-28 21:15:10.782964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.128  [2024-11-28T21:15:11.129Z] Copying: 512/512 [B] (average 500 kBps) 00:07:47.386 00:07:47.386 21:15:10 -- dd/posix.sh@93 -- # [[ 37byktnikt31c5bje49gql5uodmil74klk3h4wh9jde03b6d3yltvh3uqjv24z9h12iij2tidte4sydgtj0yyzelb1t9ipyc51p1sg8x996wtthafv37d7x3y1ytmm1l00ksgs938i57r8baheexickldl372okoa2g053dzc4wldg2ob9c65hm4cshoo2if7vowoa8dn0oxl4mmawzk4s1wznrwncj29nngrvhu9b5bgqzgcsze4v34q8dujqcz73i6smucxeilhdgausxx9qdmfm22d5we327fti52meo0qi75lcphftvcekemqw0h1u6jja6yrpyuywbn9skaef7iro3blpfuyt2tfw43gcwd0y4o6dahw0qwkbp1ct1e0mb9vc05bfdlsmfohmqb2tpmwgp1rjs0ae8k3dwvlfd61p7o9bgnniq6jdybioxw4judrtaukxkc6factq8gp20ryl7jhcsqj7j61n9it80aelxbp6jckl0k47lzarl9 == \3\7\b\y\k\t\n\i\k\t\3\1\c\5\b\j\e\4\9\g\q\l\5\u\o\d\m\i\l\7\4\k\l\k\3\h\4\w\h\9\j\d\e\0\3\b\6\d\3\y\l\t\v\h\3\u\q\j\v\2\4\z\9\h\1\2\i\i\j\2\t\i\d\t\e\4\s\y\d\g\t\j\0\y\y\z\e\l\b\1\t\9\i\p\y\c\5\1\p\1\s\g\8\x\9\9\6\w\t\t\h\a\f\v\3\7\d\7\x\3\y\1\y\t\m\m\1\l\0\0\k\s\g\s\9\3\8\i\5\7\r\8\b\a\h\e\e\x\i\c\k\l\d\l\3\7\2\o\k\o\a\2\g\0\5\3\d\z\c\4\w\l\d\g\2\o\b\9\c\6\5\h\m\4\c\s\h\o\o\2\i\f\7\v\o\w\o\a\8\d\n\0\o\x\l\4\m\m\a\w\z\k\4\s\1\w\z\n\r\w\n\c\j\2\9\n\n\g\r\v\h\u\9\b\5\b\g\q\z\g\c\s\z\e\4\v\3\4\q\8\d\u\j\q\c\z\7\3\i\6\s\m\u\c\x\e\i\l\h\d\g\a\u\s\x\x\9\q\d\m\f\m\2\2\d\5\w\e\3\2\7\f\t\i\5\2\m\e\o\0\q\i\7\5\l\c\p\h\f\t\v\c\e\k\e\m\q\w\0\h\1\u\6\j\j\a\6\y\r\p\y\u\y\w\b\n\9\s\k\a\e\f\7\i\r\o\3\b\l\p\f\u\y\t\2\t\f\w\4\3\g\c\w\d\0\y\4\o\6\d\a\h\w\0\q\w\k\b\p\1\c\t\1\e\0\m\b\9\v\c\0\5\b\f\d\l\s\m\f\o\h\m\q\b\2\t\p\m\w\g\p\1\r\j\s\0\a\e\8\k\3\d\w\v\l\f\d\6\1\p\7\o\9\b\g\n\n\i\q\6\j\d\y\b\i\o\x\w\4\j\u\d\r\t\a\u\k\x\k\c\6\f\a\c\t\q\8\g\p\2\0\r\y\l\7\j\h\c\s\q\j\7\j\6\1\n\9\i\t\8\0\a\e\l\x\b\p\6\j\c\k\l\0\k\4\7\l\z\a\r\l\9 ]] 00:07:47.386 21:15:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:47.386 21:15:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:47.386 [2024-11-28 21:15:11.005860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:47.387 [2024-11-28 21:15:11.005942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70025 ] 00:07:47.645 [2024-11-28 21:15:11.141560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.645 [2024-11-28 21:15:11.174588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.645  [2024-11-28T21:15:11.388Z] Copying: 512/512 [B] (average 500 kBps) 00:07:47.645 00:07:47.646 21:15:11 -- dd/posix.sh@93 -- # [[ 37byktnikt31c5bje49gql5uodmil74klk3h4wh9jde03b6d3yltvh3uqjv24z9h12iij2tidte4sydgtj0yyzelb1t9ipyc51p1sg8x996wtthafv37d7x3y1ytmm1l00ksgs938i57r8baheexickldl372okoa2g053dzc4wldg2ob9c65hm4cshoo2if7vowoa8dn0oxl4mmawzk4s1wznrwncj29nngrvhu9b5bgqzgcsze4v34q8dujqcz73i6smucxeilhdgausxx9qdmfm22d5we327fti52meo0qi75lcphftvcekemqw0h1u6jja6yrpyuywbn9skaef7iro3blpfuyt2tfw43gcwd0y4o6dahw0qwkbp1ct1e0mb9vc05bfdlsmfohmqb2tpmwgp1rjs0ae8k3dwvlfd61p7o9bgnniq6jdybioxw4judrtaukxkc6factq8gp20ryl7jhcsqj7j61n9it80aelxbp6jckl0k47lzarl9 == \3\7\b\y\k\t\n\i\k\t\3\1\c\5\b\j\e\4\9\g\q\l\5\u\o\d\m\i\l\7\4\k\l\k\3\h\4\w\h\9\j\d\e\0\3\b\6\d\3\y\l\t\v\h\3\u\q\j\v\2\4\z\9\h\1\2\i\i\j\2\t\i\d\t\e\4\s\y\d\g\t\j\0\y\y\z\e\l\b\1\t\9\i\p\y\c\5\1\p\1\s\g\8\x\9\9\6\w\t\t\h\a\f\v\3\7\d\7\x\3\y\1\y\t\m\m\1\l\0\0\k\s\g\s\9\3\8\i\5\7\r\8\b\a\h\e\e\x\i\c\k\l\d\l\3\7\2\o\k\o\a\2\g\0\5\3\d\z\c\4\w\l\d\g\2\o\b\9\c\6\5\h\m\4\c\s\h\o\o\2\i\f\7\v\o\w\o\a\8\d\n\0\o\x\l\4\m\m\a\w\z\k\4\s\1\w\z\n\r\w\n\c\j\2\9\n\n\g\r\v\h\u\9\b\5\b\g\q\z\g\c\s\z\e\4\v\3\4\q\8\d\u\j\q\c\z\7\3\i\6\s\m\u\c\x\e\i\l\h\d\g\a\u\s\x\x\9\q\d\m\f\m\2\2\d\5\w\e\3\2\7\f\t\i\5\2\m\e\o\0\q\i\7\5\l\c\p\h\f\t\v\c\e\k\e\m\q\w\0\h\1\u\6\j\j\a\6\y\r\p\y\u\y\w\b\n\9\s\k\a\e\f\7\i\r\o\3\b\l\p\f\u\y\t\2\t\f\w\4\3\g\c\w\d\0\y\4\o\6\d\a\h\w\0\q\w\k\b\p\1\c\t\1\e\0\m\b\9\v\c\0\5\b\f\d\l\s\m\f\o\h\m\q\b\2\t\p\m\w\g\p\1\r\j\s\0\a\e\8\k\3\d\w\v\l\f\d\6\1\p\7\o\9\b\g\n\n\i\q\6\j\d\y\b\i\o\x\w\4\j\u\d\r\t\a\u\k\x\k\c\6\f\a\c\t\q\8\g\p\2\0\r\y\l\7\j\h\c\s\q\j\7\j\6\1\n\9\i\t\8\0\a\e\l\x\b\p\6\j\c\k\l\0\k\4\7\l\z\a\r\l\9 ]] 00:07:47.646 21:15:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:47.646 21:15:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:47.905 [2024-11-28 21:15:11.396913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:47.905 [2024-11-28 21:15:11.397156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70038 ] 00:07:47.905 [2024-11-28 21:15:11.526248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.905 [2024-11-28 21:15:11.556105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.905  [2024-11-28T21:15:11.907Z] Copying: 512/512 [B] (average 500 kBps) 00:07:48.164 00:07:48.164 21:15:11 -- dd/posix.sh@93 -- # [[ 37byktnikt31c5bje49gql5uodmil74klk3h4wh9jde03b6d3yltvh3uqjv24z9h12iij2tidte4sydgtj0yyzelb1t9ipyc51p1sg8x996wtthafv37d7x3y1ytmm1l00ksgs938i57r8baheexickldl372okoa2g053dzc4wldg2ob9c65hm4cshoo2if7vowoa8dn0oxl4mmawzk4s1wznrwncj29nngrvhu9b5bgqzgcsze4v34q8dujqcz73i6smucxeilhdgausxx9qdmfm22d5we327fti52meo0qi75lcphftvcekemqw0h1u6jja6yrpyuywbn9skaef7iro3blpfuyt2tfw43gcwd0y4o6dahw0qwkbp1ct1e0mb9vc05bfdlsmfohmqb2tpmwgp1rjs0ae8k3dwvlfd61p7o9bgnniq6jdybioxw4judrtaukxkc6factq8gp20ryl7jhcsqj7j61n9it80aelxbp6jckl0k47lzarl9 == \3\7\b\y\k\t\n\i\k\t\3\1\c\5\b\j\e\4\9\g\q\l\5\u\o\d\m\i\l\7\4\k\l\k\3\h\4\w\h\9\j\d\e\0\3\b\6\d\3\y\l\t\v\h\3\u\q\j\v\2\4\z\9\h\1\2\i\i\j\2\t\i\d\t\e\4\s\y\d\g\t\j\0\y\y\z\e\l\b\1\t\9\i\p\y\c\5\1\p\1\s\g\8\x\9\9\6\w\t\t\h\a\f\v\3\7\d\7\x\3\y\1\y\t\m\m\1\l\0\0\k\s\g\s\9\3\8\i\5\7\r\8\b\a\h\e\e\x\i\c\k\l\d\l\3\7\2\o\k\o\a\2\g\0\5\3\d\z\c\4\w\l\d\g\2\o\b\9\c\6\5\h\m\4\c\s\h\o\o\2\i\f\7\v\o\w\o\a\8\d\n\0\o\x\l\4\m\m\a\w\z\k\4\s\1\w\z\n\r\w\n\c\j\2\9\n\n\g\r\v\h\u\9\b\5\b\g\q\z\g\c\s\z\e\4\v\3\4\q\8\d\u\j\q\c\z\7\3\i\6\s\m\u\c\x\e\i\l\h\d\g\a\u\s\x\x\9\q\d\m\f\m\2\2\d\5\w\e\3\2\7\f\t\i\5\2\m\e\o\0\q\i\7\5\l\c\p\h\f\t\v\c\e\k\e\m\q\w\0\h\1\u\6\j\j\a\6\y\r\p\y\u\y\w\b\n\9\s\k\a\e\f\7\i\r\o\3\b\l\p\f\u\y\t\2\t\f\w\4\3\g\c\w\d\0\y\4\o\6\d\a\h\w\0\q\w\k\b\p\1\c\t\1\e\0\m\b\9\v\c\0\5\b\f\d\l\s\m\f\o\h\m\q\b\2\t\p\m\w\g\p\1\r\j\s\0\a\e\8\k\3\d\w\v\l\f\d\6\1\p\7\o\9\b\g\n\n\i\q\6\j\d\y\b\i\o\x\w\4\j\u\d\r\t\a\u\k\x\k\c\6\f\a\c\t\q\8\g\p\2\0\r\y\l\7\j\h\c\s\q\j\7\j\6\1\n\9\i\t\8\0\a\e\l\x\b\p\6\j\c\k\l\0\k\4\7\l\z\a\r\l\9 ]] 00:07:48.164 00:07:48.164 real 0m3.010s 00:07:48.164 user 0m1.423s 00:07:48.164 sys 0m0.620s 00:07:48.164 21:15:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.164 ************************************ 00:07:48.164 END TEST dd_flags_misc 00:07:48.164 ************************************ 00:07:48.164 21:15:11 -- common/autotest_common.sh@10 -- # set +x 00:07:48.164 21:15:11 -- dd/posix.sh@131 -- # tests_forced_aio 00:07:48.164 21:15:11 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:48.164 * Second test run, disabling liburing, forcing AIO 00:07:48.164 21:15:11 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:48.164 21:15:11 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:48.164 21:15:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:48.164 21:15:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.164 21:15:11 -- common/autotest_common.sh@10 -- # set +x 00:07:48.164 ************************************ 00:07:48.164 START TEST dd_flag_append_forced_aio 00:07:48.164 ************************************ 00:07:48.164 21:15:11 -- common/autotest_common.sh@1114 -- # append 00:07:48.164 21:15:11 -- dd/posix.sh@16 -- # local dump0 00:07:48.164 21:15:11 -- dd/posix.sh@17 -- # local dump1 00:07:48.164 21:15:11 -- dd/posix.sh@19 -- # gen_bytes 32 
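From the "* Second test run, disabling liburing, forcing AIO" line onward the whole flag suite is repeated with DD_APP+=("--aio"), so every subsequent spdk_dd invocation carries --aio and the *_forced_aio tests exercise the same append/directory/nofollow/noatime paths through the POSIX AIO backend instead of io_uring. A sketch of the mechanism, assuming DD_APP starts out as the spdk_dd command line, which the --aio flag on the later invocations suggests:

  # Hypothetical initial value; the trace only shows the += step.
  DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
  DD_APP+=("--aio")   # force the AIO code path for the second pass
  "${DD_APP[@]}" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
                 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append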
00:07:48.164 21:15:11 -- dd/common.sh@98 -- # xtrace_disable 00:07:48.164 21:15:11 -- common/autotest_common.sh@10 -- # set +x 00:07:48.164 21:15:11 -- dd/posix.sh@19 -- # dump0=94inmuyl65bg0sma13falkrl5rjtmfzb 00:07:48.164 21:15:11 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:48.164 21:15:11 -- dd/common.sh@98 -- # xtrace_disable 00:07:48.164 21:15:11 -- common/autotest_common.sh@10 -- # set +x 00:07:48.164 21:15:11 -- dd/posix.sh@20 -- # dump1=cig83up5477fxqy4blmxnlxa0oq9czy6 00:07:48.164 21:15:11 -- dd/posix.sh@22 -- # printf %s 94inmuyl65bg0sma13falkrl5rjtmfzb 00:07:48.164 21:15:11 -- dd/posix.sh@23 -- # printf %s cig83up5477fxqy4blmxnlxa0oq9czy6 00:07:48.164 21:15:11 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:48.164 [2024-11-28 21:15:11.838221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:48.164 [2024-11-28 21:15:11.838314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70059 ] 00:07:48.423 [2024-11-28 21:15:11.974060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.423 [2024-11-28 21:15:12.003450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.423  [2024-11-28T21:15:12.425Z] Copying: 32/32 [B] (average 31 kBps) 00:07:48.682 00:07:48.682 21:15:12 -- dd/posix.sh@27 -- # [[ cig83up5477fxqy4blmxnlxa0oq9czy694inmuyl65bg0sma13falkrl5rjtmfzb == \c\i\g\8\3\u\p\5\4\7\7\f\x\q\y\4\b\l\m\x\n\l\x\a\0\o\q\9\c\z\y\6\9\4\i\n\m\u\y\l\6\5\b\g\0\s\m\a\1\3\f\a\l\k\r\l\5\r\j\t\m\f\z\b ]] 00:07:48.682 00:07:48.682 real 0m0.408s 00:07:48.682 user 0m0.194s 00:07:48.682 sys 0m0.094s 00:07:48.682 21:15:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.682 ************************************ 00:07:48.682 END TEST dd_flag_append_forced_aio 00:07:48.682 ************************************ 00:07:48.682 21:15:12 -- common/autotest_common.sh@10 -- # set +x 00:07:48.682 21:15:12 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:48.682 21:15:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:48.682 21:15:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.682 21:15:12 -- common/autotest_common.sh@10 -- # set +x 00:07:48.682 ************************************ 00:07:48.682 START TEST dd_flag_directory_forced_aio 00:07:48.682 ************************************ 00:07:48.682 21:15:12 -- common/autotest_common.sh@1114 -- # directory 00:07:48.682 21:15:12 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:48.682 21:15:12 -- common/autotest_common.sh@650 -- # local es=0 00:07:48.682 21:15:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:48.682 21:15:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.682 21:15:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.682 21:15:12 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.682 21:15:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.682 21:15:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.682 21:15:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.682 21:15:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.682 21:15:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.682 21:15:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:48.682 [2024-11-28 21:15:12.288644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:48.682 [2024-11-28 21:15:12.288895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70091 ] 00:07:48.941 [2024-11-28 21:15:12.428484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.941 [2024-11-28 21:15:12.458351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.941 [2024-11-28 21:15:12.498063] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:48.941 [2024-11-28 21:15:12.498116] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:48.941 [2024-11-28 21:15:12.498143] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.941 [2024-11-28 21:15:12.551812] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:48.941 21:15:12 -- common/autotest_common.sh@653 -- # es=236 00:07:48.941 21:15:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.941 21:15:12 -- common/autotest_common.sh@662 -- # es=108 00:07:48.941 21:15:12 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:48.941 21:15:12 -- common/autotest_common.sh@670 -- # es=1 00:07:48.941 21:15:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.941 21:15:12 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:48.941 21:15:12 -- common/autotest_common.sh@650 -- # local es=0 00:07:48.941 21:15:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:48.941 21:15:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.941 21:15:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.941 21:15:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.941 21:15:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.941 21:15:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.941 21:15:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.941 21:15:12 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.941 21:15:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.941 21:15:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:48.941 [2024-11-28 21:15:12.659670] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:48.941 [2024-11-28 21:15:12.659929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70095 ] 00:07:49.200 [2024-11-28 21:15:12.794475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.200 [2024-11-28 21:15:12.824426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.200 [2024-11-28 21:15:12.864186] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:49.200 [2024-11-28 21:15:12.864238] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:49.200 [2024-11-28 21:15:12.864267] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.200 [2024-11-28 21:15:12.918517] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:49.459 21:15:12 -- common/autotest_common.sh@653 -- # es=236 00:07:49.459 21:15:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.459 21:15:12 -- common/autotest_common.sh@662 -- # es=108 00:07:49.459 21:15:12 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:49.459 21:15:12 -- common/autotest_common.sh@670 -- # es=1 00:07:49.460 ************************************ 00:07:49.460 END TEST dd_flag_directory_forced_aio 00:07:49.460 ************************************ 00:07:49.460 21:15:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.460 00:07:49.460 real 0m0.757s 00:07:49.460 user 0m0.377s 00:07:49.460 sys 0m0.172s 00:07:49.460 21:15:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.460 21:15:12 -- common/autotest_common.sh@10 -- # set +x 00:07:49.460 21:15:13 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:49.460 21:15:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.460 21:15:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.460 21:15:13 -- common/autotest_common.sh@10 -- # set +x 00:07:49.460 ************************************ 00:07:49.460 START TEST dd_flag_nofollow_forced_aio 00:07:49.460 ************************************ 00:07:49.460 21:15:13 -- common/autotest_common.sh@1114 -- # nofollow 00:07:49.460 21:15:13 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:49.460 21:15:13 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:49.460 21:15:13 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:49.460 21:15:13 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:49.460 21:15:13 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.460 21:15:13 -- common/autotest_common.sh@650 -- # local es=0 00:07:49.460 21:15:13 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.460 21:15:13 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.460 21:15:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.460 21:15:13 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.460 21:15:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.460 21:15:13 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.460 21:15:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.460 21:15:13 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.460 21:15:13 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:49.460 21:15:13 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.460 [2024-11-28 21:15:13.103312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:49.460 [2024-11-28 21:15:13.103405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70124 ] 00:07:49.718 [2024-11-28 21:15:13.238133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.718 [2024-11-28 21:15:13.271469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.718 [2024-11-28 21:15:13.312495] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:49.718 [2024-11-28 21:15:13.312768] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:49.718 [2024-11-28 21:15:13.312980] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.718 [2024-11-28 21:15:13.370962] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:49.718 21:15:13 -- common/autotest_common.sh@653 -- # es=216 00:07:49.718 21:15:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.718 21:15:13 -- common/autotest_common.sh@662 -- # es=88 00:07:49.718 21:15:13 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:49.718 21:15:13 -- common/autotest_common.sh@670 -- # es=1 00:07:49.718 21:15:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.718 21:15:13 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:49.718 21:15:13 -- common/autotest_common.sh@650 -- # local es=0 00:07:49.718 21:15:13 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:49.718 21:15:13 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.719 21:15:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.719 21:15:13 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.719 21:15:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.719 21:15:13 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.719 21:15:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.719 21:15:13 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.719 21:15:13 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:49.719 21:15:13 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:49.977 [2024-11-28 21:15:13.490290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:49.977 [2024-11-28 21:15:13.490383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70133 ] 00:07:49.977 [2024-11-28 21:15:13.626809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.977 [2024-11-28 21:15:13.659403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.977 [2024-11-28 21:15:13.700163] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:49.978 [2024-11-28 21:15:13.700248] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:49.978 [2024-11-28 21:15:13.700315] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.236 [2024-11-28 21:15:13.758829] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:50.236 21:15:13 -- common/autotest_common.sh@653 -- # es=216 00:07:50.236 21:15:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.236 21:15:13 -- common/autotest_common.sh@662 -- # es=88 00:07:50.236 21:15:13 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:50.236 21:15:13 -- common/autotest_common.sh@670 -- # es=1 00:07:50.236 21:15:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.236 21:15:13 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:50.236 21:15:13 -- dd/common.sh@98 -- # xtrace_disable 00:07:50.236 21:15:13 -- common/autotest_common.sh@10 -- # set +x 00:07:50.236 21:15:13 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.236 [2024-11-28 21:15:13.884922] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:50.236 [2024-11-28 21:15:13.885271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70135 ] 00:07:50.494 [2024-11-28 21:15:14.018396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.494 [2024-11-28 21:15:14.051483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.494  [2024-11-28T21:15:14.237Z] Copying: 512/512 [B] (average 500 kBps) 00:07:50.494 00:07:50.753 21:15:14 -- dd/posix.sh@49 -- # [[ ol87pdfb3v1qbon58gk0irb3tpiaq17gwr488i3knh4hkj3n0akxbvrvfwruls5va3hnpr4818ypimj0nrpqkb39jkwf8mhg49lkea0k3dwvpneclcsfwiwst9xs4ortanu6ay86dz5taogiwt5d63l7v21jz1qpleblmr77ylr29b2mnopsk913abumxo5vr1nzvdanto6iiszpjrbqpzkmqic5bu35jon5m3fxqnx687gpqazjuux6nlti6tbdtmlw61h6wbd63s0ynk1m5jc1occ12vh9s0r7rnvancftxtphexug57qf510u8ti9voc2qd1mt76j01q5q0uyhj0l4o3esrxgrhmsrsau4vt4kzeujn1adbe6mopi0cba6ld4vrkr7qbuidsrmgiq948p0opki6u0gmmksdlsai3bg0ege7vkxzqq11a0098p999etd6av3qukwbfunbhq3n1njjlo8yk4i9sh1jbukyn6gbsesnj2ohj17g933xy == \o\l\8\7\p\d\f\b\3\v\1\q\b\o\n\5\8\g\k\0\i\r\b\3\t\p\i\a\q\1\7\g\w\r\4\8\8\i\3\k\n\h\4\h\k\j\3\n\0\a\k\x\b\v\r\v\f\w\r\u\l\s\5\v\a\3\h\n\p\r\4\8\1\8\y\p\i\m\j\0\n\r\p\q\k\b\3\9\j\k\w\f\8\m\h\g\4\9\l\k\e\a\0\k\3\d\w\v\p\n\e\c\l\c\s\f\w\i\w\s\t\9\x\s\4\o\r\t\a\n\u\6\a\y\8\6\d\z\5\t\a\o\g\i\w\t\5\d\6\3\l\7\v\2\1\j\z\1\q\p\l\e\b\l\m\r\7\7\y\l\r\2\9\b\2\m\n\o\p\s\k\9\1\3\a\b\u\m\x\o\5\v\r\1\n\z\v\d\a\n\t\o\6\i\i\s\z\p\j\r\b\q\p\z\k\m\q\i\c\5\b\u\3\5\j\o\n\5\m\3\f\x\q\n\x\6\8\7\g\p\q\a\z\j\u\u\x\6\n\l\t\i\6\t\b\d\t\m\l\w\6\1\h\6\w\b\d\6\3\s\0\y\n\k\1\m\5\j\c\1\o\c\c\1\2\v\h\9\s\0\r\7\r\n\v\a\n\c\f\t\x\t\p\h\e\x\u\g\5\7\q\f\5\1\0\u\8\t\i\9\v\o\c\2\q\d\1\m\t\7\6\j\0\1\q\5\q\0\u\y\h\j\0\l\4\o\3\e\s\r\x\g\r\h\m\s\r\s\a\u\4\v\t\4\k\z\e\u\j\n\1\a\d\b\e\6\m\o\p\i\0\c\b\a\6\l\d\4\v\r\k\r\7\q\b\u\i\d\s\r\m\g\i\q\9\4\8\p\0\o\p\k\i\6\u\0\g\m\m\k\s\d\l\s\a\i\3\b\g\0\e\g\e\7\v\k\x\z\q\q\1\1\a\0\0\9\8\p\9\9\9\e\t\d\6\a\v\3\q\u\k\w\b\f\u\n\b\h\q\3\n\1\n\j\j\l\o\8\y\k\4\i\9\s\h\1\j\b\u\k\y\n\6\g\b\s\e\s\n\j\2\o\h\j\1\7\g\9\3\3\x\y ]] 00:07:50.753 00:07:50.753 real 0m1.190s 00:07:50.753 user 0m0.596s 00:07:50.753 sys 0m0.265s 00:07:50.753 ************************************ 00:07:50.754 END TEST dd_flag_nofollow_forced_aio 00:07:50.754 ************************************ 00:07:50.754 21:15:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.754 21:15:14 -- common/autotest_common.sh@10 -- # set +x 00:07:50.754 21:15:14 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:50.754 21:15:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:50.754 21:15:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.754 21:15:14 -- common/autotest_common.sh@10 -- # set +x 00:07:50.754 ************************************ 00:07:50.754 START TEST dd_flag_noatime_forced_aio 00:07:50.754 ************************************ 00:07:50.754 21:15:14 -- common/autotest_common.sh@1114 -- # noatime 00:07:50.754 21:15:14 -- dd/posix.sh@53 -- # local atime_if 00:07:50.754 21:15:14 -- dd/posix.sh@54 -- # local atime_of 00:07:50.754 21:15:14 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:50.754 21:15:14 -- dd/common.sh@98 -- # xtrace_disable 00:07:50.754 21:15:14 -- common/autotest_common.sh@10 -- # set +x 00:07:50.754 21:15:14 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.754 21:15:14 -- dd/posix.sh@60 -- 
# atime_if=1732828514 00:07:50.754 21:15:14 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.754 21:15:14 -- dd/posix.sh@61 -- # atime_of=1732828514 00:07:50.754 21:15:14 -- dd/posix.sh@66 -- # sleep 1 00:07:51.690 21:15:15 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.690 [2024-11-28 21:15:15.369751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:51.690 [2024-11-28 21:15:15.369854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70176 ] 00:07:51.950 [2024-11-28 21:15:15.511626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.950 [2024-11-28 21:15:15.551660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.950  [2024-11-28T21:15:15.952Z] Copying: 512/512 [B] (average 500 kBps) 00:07:52.209 00:07:52.209 21:15:15 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.209 21:15:15 -- dd/posix.sh@69 -- # (( atime_if == 1732828514 )) 00:07:52.209 21:15:15 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.209 21:15:15 -- dd/posix.sh@70 -- # (( atime_of == 1732828514 )) 00:07:52.209 21:15:15 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.209 [2024-11-28 21:15:15.815061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:52.209 [2024-11-28 21:15:15.815178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70187 ] 00:07:52.468 [2024-11-28 21:15:15.953677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.468 [2024-11-28 21:15:15.987361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.468  [2024-11-28T21:15:16.211Z] Copying: 512/512 [B] (average 500 kBps) 00:07:52.468 00:07:52.468 21:15:16 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.468 21:15:16 -- dd/posix.sh@73 -- # (( atime_if < 1732828516 )) 00:07:52.468 00:07:52.468 real 0m1.882s 00:07:52.468 user 0m0.433s 00:07:52.468 sys 0m0.203s 00:07:52.468 21:15:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:52.468 21:15:16 -- common/autotest_common.sh@10 -- # set +x 00:07:52.468 ************************************ 00:07:52.468 END TEST dd_flag_noatime_forced_aio 00:07:52.469 ************************************ 00:07:52.729 21:15:16 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:52.729 21:15:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:52.729 21:15:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.729 21:15:16 -- common/autotest_common.sh@10 -- # set +x 00:07:52.729 ************************************ 00:07:52.729 START TEST dd_flags_misc_forced_aio 00:07:52.729 ************************************ 00:07:52.729 21:15:16 -- common/autotest_common.sh@1114 -- # io 00:07:52.729 21:15:16 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:52.729 21:15:16 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:52.729 21:15:16 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:52.729 21:15:16 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:52.729 21:15:16 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:52.729 21:15:16 -- dd/common.sh@98 -- # xtrace_disable 00:07:52.729 21:15:16 -- common/autotest_common.sh@10 -- # set +x 00:07:52.729 21:15:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:52.729 21:15:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:52.729 [2024-11-28 21:15:16.299243] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
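The noatime sequence that finishes above records the source file's access time with stat --printf=%X, sleeps one second, copies with --iflag=noatime, and expects the atime to be unchanged (a later copy without the flag must advance it, hence the closing atime_if < ... check). A rough shell equivalent, assuming the filesystem actually updates atimes (a relatime or noatime mount can mask the difference) and that the caller owns the file, which O_NOATIME requires:

# noatime: reading the source must not advance its access time
src=$(mktemp) && dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=512 count=1 status=none
before=$(stat --printf=%X "$src")
sleep 1
dd if="$src" of="$dst" iflag=noatime status=none
after=$(stat --printf=%X "$src")
[ "$before" -eq "$after" ] && echo "atime preserved" || echo "atime changed"
rm -f "$src" "$dst"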
00:07:52.729 [2024-11-28 21:15:16.299375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70214 ] 00:07:52.729 [2024-11-28 21:15:16.437492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.988 [2024-11-28 21:15:16.473452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.988  [2024-11-28T21:15:16.731Z] Copying: 512/512 [B] (average 500 kBps) 00:07:52.988 00:07:52.988 21:15:16 -- dd/posix.sh@93 -- # [[ li7t1ujzatiutqvcmc17rcyd7un0tnwkqt0h70h1h62yurb07g88ivw027qj5z1v83byrej0rhjet6c5zc9h7592x8tc092vtk90kcue31dmmkpgu4m05yowwvfgmlpaakhqah82dr5k4nocplsgb07qr4z59jillg055jesul5tubfsj66bf3ld6co1aed5mbc9t76vm83i6vh8sszebqhhqt8g37sr36otoqjc5s7clvuge7rfvh0roslfzk1xw7bao5xvp3yojcxlkzhcjjsaku7arktdzieods1zz2b5po11xa0strr8846b39mwppcu2y4krox06lh4dx3428l4j6icmkbs3judjonsbfgpwqhf65lsm8hwnfiie74rclqk2uvah4s0l4dltfhnq6zcwsy5demo3a59knminza31roryer28p1lx4fzw00a96wdrtof6k5i2koxkul5u3kmyomi4gc3vou7viqt6hg50c0xpr4896ki9hqz406n == \l\i\7\t\1\u\j\z\a\t\i\u\t\q\v\c\m\c\1\7\r\c\y\d\7\u\n\0\t\n\w\k\q\t\0\h\7\0\h\1\h\6\2\y\u\r\b\0\7\g\8\8\i\v\w\0\2\7\q\j\5\z\1\v\8\3\b\y\r\e\j\0\r\h\j\e\t\6\c\5\z\c\9\h\7\5\9\2\x\8\t\c\0\9\2\v\t\k\9\0\k\c\u\e\3\1\d\m\m\k\p\g\u\4\m\0\5\y\o\w\w\v\f\g\m\l\p\a\a\k\h\q\a\h\8\2\d\r\5\k\4\n\o\c\p\l\s\g\b\0\7\q\r\4\z\5\9\j\i\l\l\g\0\5\5\j\e\s\u\l\5\t\u\b\f\s\j\6\6\b\f\3\l\d\6\c\o\1\a\e\d\5\m\b\c\9\t\7\6\v\m\8\3\i\6\v\h\8\s\s\z\e\b\q\h\h\q\t\8\g\3\7\s\r\3\6\o\t\o\q\j\c\5\s\7\c\l\v\u\g\e\7\r\f\v\h\0\r\o\s\l\f\z\k\1\x\w\7\b\a\o\5\x\v\p\3\y\o\j\c\x\l\k\z\h\c\j\j\s\a\k\u\7\a\r\k\t\d\z\i\e\o\d\s\1\z\z\2\b\5\p\o\1\1\x\a\0\s\t\r\r\8\8\4\6\b\3\9\m\w\p\p\c\u\2\y\4\k\r\o\x\0\6\l\h\4\d\x\3\4\2\8\l\4\j\6\i\c\m\k\b\s\3\j\u\d\j\o\n\s\b\f\g\p\w\q\h\f\6\5\l\s\m\8\h\w\n\f\i\i\e\7\4\r\c\l\q\k\2\u\v\a\h\4\s\0\l\4\d\l\t\f\h\n\q\6\z\c\w\s\y\5\d\e\m\o\3\a\5\9\k\n\m\i\n\z\a\3\1\r\o\r\y\e\r\2\8\p\1\l\x\4\f\z\w\0\0\a\9\6\w\d\r\t\o\f\6\k\5\i\2\k\o\x\k\u\l\5\u\3\k\m\y\o\m\i\4\g\c\3\v\o\u\7\v\i\q\t\6\h\g\5\0\c\0\x\p\r\4\8\9\6\k\i\9\h\q\z\4\0\6\n ]] 00:07:52.988 21:15:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:52.988 21:15:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:52.988 [2024-11-28 21:15:16.711705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:52.988 [2024-11-28 21:15:16.711804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70221 ] 00:07:53.247 [2024-11-28 21:15:16.848510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.247 [2024-11-28 21:15:16.880720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.247  [2024-11-28T21:15:17.249Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.506 00:07:53.506 21:15:17 -- dd/posix.sh@93 -- # [[ li7t1ujzatiutqvcmc17rcyd7un0tnwkqt0h70h1h62yurb07g88ivw027qj5z1v83byrej0rhjet6c5zc9h7592x8tc092vtk90kcue31dmmkpgu4m05yowwvfgmlpaakhqah82dr5k4nocplsgb07qr4z59jillg055jesul5tubfsj66bf3ld6co1aed5mbc9t76vm83i6vh8sszebqhhqt8g37sr36otoqjc5s7clvuge7rfvh0roslfzk1xw7bao5xvp3yojcxlkzhcjjsaku7arktdzieods1zz2b5po11xa0strr8846b39mwppcu2y4krox06lh4dx3428l4j6icmkbs3judjonsbfgpwqhf65lsm8hwnfiie74rclqk2uvah4s0l4dltfhnq6zcwsy5demo3a59knminza31roryer28p1lx4fzw00a96wdrtof6k5i2koxkul5u3kmyomi4gc3vou7viqt6hg50c0xpr4896ki9hqz406n == \l\i\7\t\1\u\j\z\a\t\i\u\t\q\v\c\m\c\1\7\r\c\y\d\7\u\n\0\t\n\w\k\q\t\0\h\7\0\h\1\h\6\2\y\u\r\b\0\7\g\8\8\i\v\w\0\2\7\q\j\5\z\1\v\8\3\b\y\r\e\j\0\r\h\j\e\t\6\c\5\z\c\9\h\7\5\9\2\x\8\t\c\0\9\2\v\t\k\9\0\k\c\u\e\3\1\d\m\m\k\p\g\u\4\m\0\5\y\o\w\w\v\f\g\m\l\p\a\a\k\h\q\a\h\8\2\d\r\5\k\4\n\o\c\p\l\s\g\b\0\7\q\r\4\z\5\9\j\i\l\l\g\0\5\5\j\e\s\u\l\5\t\u\b\f\s\j\6\6\b\f\3\l\d\6\c\o\1\a\e\d\5\m\b\c\9\t\7\6\v\m\8\3\i\6\v\h\8\s\s\z\e\b\q\h\h\q\t\8\g\3\7\s\r\3\6\o\t\o\q\j\c\5\s\7\c\l\v\u\g\e\7\r\f\v\h\0\r\o\s\l\f\z\k\1\x\w\7\b\a\o\5\x\v\p\3\y\o\j\c\x\l\k\z\h\c\j\j\s\a\k\u\7\a\r\k\t\d\z\i\e\o\d\s\1\z\z\2\b\5\p\o\1\1\x\a\0\s\t\r\r\8\8\4\6\b\3\9\m\w\p\p\c\u\2\y\4\k\r\o\x\0\6\l\h\4\d\x\3\4\2\8\l\4\j\6\i\c\m\k\b\s\3\j\u\d\j\o\n\s\b\f\g\p\w\q\h\f\6\5\l\s\m\8\h\w\n\f\i\i\e\7\4\r\c\l\q\k\2\u\v\a\h\4\s\0\l\4\d\l\t\f\h\n\q\6\z\c\w\s\y\5\d\e\m\o\3\a\5\9\k\n\m\i\n\z\a\3\1\r\o\r\y\e\r\2\8\p\1\l\x\4\f\z\w\0\0\a\9\6\w\d\r\t\o\f\6\k\5\i\2\k\o\x\k\u\l\5\u\3\k\m\y\o\m\i\4\g\c\3\v\o\u\7\v\i\q\t\6\h\g\5\0\c\0\x\p\r\4\8\9\6\k\i\9\h\q\z\4\0\6\n ]] 00:07:53.506 21:15:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.506 21:15:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:53.506 [2024-11-28 21:15:17.115802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:53.506 [2024-11-28 21:15:17.115918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70223 ] 00:07:53.766 [2024-11-28 21:15:17.253792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.766 [2024-11-28 21:15:17.285807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.766  [2024-11-28T21:15:17.509Z] Copying: 512/512 [B] (average 166 kBps) 00:07:53.766 00:07:53.766 21:15:17 -- dd/posix.sh@93 -- # [[ li7t1ujzatiutqvcmc17rcyd7un0tnwkqt0h70h1h62yurb07g88ivw027qj5z1v83byrej0rhjet6c5zc9h7592x8tc092vtk90kcue31dmmkpgu4m05yowwvfgmlpaakhqah82dr5k4nocplsgb07qr4z59jillg055jesul5tubfsj66bf3ld6co1aed5mbc9t76vm83i6vh8sszebqhhqt8g37sr36otoqjc5s7clvuge7rfvh0roslfzk1xw7bao5xvp3yojcxlkzhcjjsaku7arktdzieods1zz2b5po11xa0strr8846b39mwppcu2y4krox06lh4dx3428l4j6icmkbs3judjonsbfgpwqhf65lsm8hwnfiie74rclqk2uvah4s0l4dltfhnq6zcwsy5demo3a59knminza31roryer28p1lx4fzw00a96wdrtof6k5i2koxkul5u3kmyomi4gc3vou7viqt6hg50c0xpr4896ki9hqz406n == \l\i\7\t\1\u\j\z\a\t\i\u\t\q\v\c\m\c\1\7\r\c\y\d\7\u\n\0\t\n\w\k\q\t\0\h\7\0\h\1\h\6\2\y\u\r\b\0\7\g\8\8\i\v\w\0\2\7\q\j\5\z\1\v\8\3\b\y\r\e\j\0\r\h\j\e\t\6\c\5\z\c\9\h\7\5\9\2\x\8\t\c\0\9\2\v\t\k\9\0\k\c\u\e\3\1\d\m\m\k\p\g\u\4\m\0\5\y\o\w\w\v\f\g\m\l\p\a\a\k\h\q\a\h\8\2\d\r\5\k\4\n\o\c\p\l\s\g\b\0\7\q\r\4\z\5\9\j\i\l\l\g\0\5\5\j\e\s\u\l\5\t\u\b\f\s\j\6\6\b\f\3\l\d\6\c\o\1\a\e\d\5\m\b\c\9\t\7\6\v\m\8\3\i\6\v\h\8\s\s\z\e\b\q\h\h\q\t\8\g\3\7\s\r\3\6\o\t\o\q\j\c\5\s\7\c\l\v\u\g\e\7\r\f\v\h\0\r\o\s\l\f\z\k\1\x\w\7\b\a\o\5\x\v\p\3\y\o\j\c\x\l\k\z\h\c\j\j\s\a\k\u\7\a\r\k\t\d\z\i\e\o\d\s\1\z\z\2\b\5\p\o\1\1\x\a\0\s\t\r\r\8\8\4\6\b\3\9\m\w\p\p\c\u\2\y\4\k\r\o\x\0\6\l\h\4\d\x\3\4\2\8\l\4\j\6\i\c\m\k\b\s\3\j\u\d\j\o\n\s\b\f\g\p\w\q\h\f\6\5\l\s\m\8\h\w\n\f\i\i\e\7\4\r\c\l\q\k\2\u\v\a\h\4\s\0\l\4\d\l\t\f\h\n\q\6\z\c\w\s\y\5\d\e\m\o\3\a\5\9\k\n\m\i\n\z\a\3\1\r\o\r\y\e\r\2\8\p\1\l\x\4\f\z\w\0\0\a\9\6\w\d\r\t\o\f\6\k\5\i\2\k\o\x\k\u\l\5\u\3\k\m\y\o\m\i\4\g\c\3\v\o\u\7\v\i\q\t\6\h\g\5\0\c\0\x\p\r\4\8\9\6\k\i\9\h\q\z\4\0\6\n ]] 00:07:53.766 21:15:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.766 21:15:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:54.026 [2024-11-28 21:15:17.526252] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:54.026 [2024-11-28 21:15:17.526377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70235 ] 00:07:54.027 [2024-11-28 21:15:17.672135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.027 [2024-11-28 21:15:17.702077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.027  [2024-11-28T21:15:18.030Z] Copying: 512/512 [B] (average 166 kBps) 00:07:54.287 00:07:54.287 21:15:17 -- dd/posix.sh@93 -- # [[ li7t1ujzatiutqvcmc17rcyd7un0tnwkqt0h70h1h62yurb07g88ivw027qj5z1v83byrej0rhjet6c5zc9h7592x8tc092vtk90kcue31dmmkpgu4m05yowwvfgmlpaakhqah82dr5k4nocplsgb07qr4z59jillg055jesul5tubfsj66bf3ld6co1aed5mbc9t76vm83i6vh8sszebqhhqt8g37sr36otoqjc5s7clvuge7rfvh0roslfzk1xw7bao5xvp3yojcxlkzhcjjsaku7arktdzieods1zz2b5po11xa0strr8846b39mwppcu2y4krox06lh4dx3428l4j6icmkbs3judjonsbfgpwqhf65lsm8hwnfiie74rclqk2uvah4s0l4dltfhnq6zcwsy5demo3a59knminza31roryer28p1lx4fzw00a96wdrtof6k5i2koxkul5u3kmyomi4gc3vou7viqt6hg50c0xpr4896ki9hqz406n == \l\i\7\t\1\u\j\z\a\t\i\u\t\q\v\c\m\c\1\7\r\c\y\d\7\u\n\0\t\n\w\k\q\t\0\h\7\0\h\1\h\6\2\y\u\r\b\0\7\g\8\8\i\v\w\0\2\7\q\j\5\z\1\v\8\3\b\y\r\e\j\0\r\h\j\e\t\6\c\5\z\c\9\h\7\5\9\2\x\8\t\c\0\9\2\v\t\k\9\0\k\c\u\e\3\1\d\m\m\k\p\g\u\4\m\0\5\y\o\w\w\v\f\g\m\l\p\a\a\k\h\q\a\h\8\2\d\r\5\k\4\n\o\c\p\l\s\g\b\0\7\q\r\4\z\5\9\j\i\l\l\g\0\5\5\j\e\s\u\l\5\t\u\b\f\s\j\6\6\b\f\3\l\d\6\c\o\1\a\e\d\5\m\b\c\9\t\7\6\v\m\8\3\i\6\v\h\8\s\s\z\e\b\q\h\h\q\t\8\g\3\7\s\r\3\6\o\t\o\q\j\c\5\s\7\c\l\v\u\g\e\7\r\f\v\h\0\r\o\s\l\f\z\k\1\x\w\7\b\a\o\5\x\v\p\3\y\o\j\c\x\l\k\z\h\c\j\j\s\a\k\u\7\a\r\k\t\d\z\i\e\o\d\s\1\z\z\2\b\5\p\o\1\1\x\a\0\s\t\r\r\8\8\4\6\b\3\9\m\w\p\p\c\u\2\y\4\k\r\o\x\0\6\l\h\4\d\x\3\4\2\8\l\4\j\6\i\c\m\k\b\s\3\j\u\d\j\o\n\s\b\f\g\p\w\q\h\f\6\5\l\s\m\8\h\w\n\f\i\i\e\7\4\r\c\l\q\k\2\u\v\a\h\4\s\0\l\4\d\l\t\f\h\n\q\6\z\c\w\s\y\5\d\e\m\o\3\a\5\9\k\n\m\i\n\z\a\3\1\r\o\r\y\e\r\2\8\p\1\l\x\4\f\z\w\0\0\a\9\6\w\d\r\t\o\f\6\k\5\i\2\k\o\x\k\u\l\5\u\3\k\m\y\o\m\i\4\g\c\3\v\o\u\7\v\i\q\t\6\h\g\5\0\c\0\x\p\r\4\8\9\6\k\i\9\h\q\z\4\0\6\n ]] 00:07:54.287 21:15:17 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:54.287 21:15:17 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:54.287 21:15:17 -- dd/common.sh@98 -- # xtrace_disable 00:07:54.287 21:15:17 -- common/autotest_common.sh@10 -- # set +x 00:07:54.287 21:15:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.287 21:15:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:54.287 [2024-11-28 21:15:17.949417] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:54.287 [2024-11-28 21:15:17.950105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70238 ] 00:07:54.548 [2024-11-28 21:15:18.087704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.548 [2024-11-28 21:15:18.120472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.548  [2024-11-28T21:15:18.549Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.806 00:07:54.807 21:15:18 -- dd/posix.sh@93 -- # [[ seur8vjvfv9ng463kyasfu98htsemq0f14ngbqzgrchhpjftub0r7h63r4ko44yjf4uyz5ssvfz0qh9rgy606cdhda8og48gsnaam1cnlnsmb6fnb98pj8bxgx8545pveebm1ncnzxsorfuvudzv5s8zr1642yzm8p9lku25sbdd6nis99j0ds04gbcis5rhxxm51itgp6mnw02yfplwhcm9g09doj7i4d4fwb0vqubdb89mqkdi5kl3xqffnm20zzn5vyp9k2f1r8esrei48l51cgjp6pwfvjgclvp391vf9gsti4anlf5oldkmdl536zejjxd9zazrbtx203k163egeb5vsmg1j5utdvs36ym6rf4q95uv1fpqvded0kzsxkngw8wn86kwukt0bifm8hg52t6vcd9lhgyaoll1xsvpmfejzqukhe9gtlckdhjtlpuyh87gj6mmdxtt5h0vtgq7wxtexgenmy5p2ztpp00nd5q33nclygdhz65ya5gt == \s\e\u\r\8\v\j\v\f\v\9\n\g\4\6\3\k\y\a\s\f\u\9\8\h\t\s\e\m\q\0\f\1\4\n\g\b\q\z\g\r\c\h\h\p\j\f\t\u\b\0\r\7\h\6\3\r\4\k\o\4\4\y\j\f\4\u\y\z\5\s\s\v\f\z\0\q\h\9\r\g\y\6\0\6\c\d\h\d\a\8\o\g\4\8\g\s\n\a\a\m\1\c\n\l\n\s\m\b\6\f\n\b\9\8\p\j\8\b\x\g\x\8\5\4\5\p\v\e\e\b\m\1\n\c\n\z\x\s\o\r\f\u\v\u\d\z\v\5\s\8\z\r\1\6\4\2\y\z\m\8\p\9\l\k\u\2\5\s\b\d\d\6\n\i\s\9\9\j\0\d\s\0\4\g\b\c\i\s\5\r\h\x\x\m\5\1\i\t\g\p\6\m\n\w\0\2\y\f\p\l\w\h\c\m\9\g\0\9\d\o\j\7\i\4\d\4\f\w\b\0\v\q\u\b\d\b\8\9\m\q\k\d\i\5\k\l\3\x\q\f\f\n\m\2\0\z\z\n\5\v\y\p\9\k\2\f\1\r\8\e\s\r\e\i\4\8\l\5\1\c\g\j\p\6\p\w\f\v\j\g\c\l\v\p\3\9\1\v\f\9\g\s\t\i\4\a\n\l\f\5\o\l\d\k\m\d\l\5\3\6\z\e\j\j\x\d\9\z\a\z\r\b\t\x\2\0\3\k\1\6\3\e\g\e\b\5\v\s\m\g\1\j\5\u\t\d\v\s\3\6\y\m\6\r\f\4\q\9\5\u\v\1\f\p\q\v\d\e\d\0\k\z\s\x\k\n\g\w\8\w\n\8\6\k\w\u\k\t\0\b\i\f\m\8\h\g\5\2\t\6\v\c\d\9\l\h\g\y\a\o\l\l\1\x\s\v\p\m\f\e\j\z\q\u\k\h\e\9\g\t\l\c\k\d\h\j\t\l\p\u\y\h\8\7\g\j\6\m\m\d\x\t\t\5\h\0\v\t\g\q\7\w\x\t\e\x\g\e\n\m\y\5\p\2\z\t\p\p\0\0\n\d\5\q\3\3\n\c\l\y\g\d\h\z\6\5\y\a\5\g\t ]] 00:07:54.807 21:15:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.807 21:15:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:54.807 [2024-11-28 21:15:18.357827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:54.807 [2024-11-28 21:15:18.357944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70246 ] 00:07:54.807 [2024-11-28 21:15:18.491338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.807 [2024-11-28 21:15:18.528609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.066  [2024-11-28T21:15:18.809Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.066 00:07:55.066 21:15:18 -- dd/posix.sh@93 -- # [[ seur8vjvfv9ng463kyasfu98htsemq0f14ngbqzgrchhpjftub0r7h63r4ko44yjf4uyz5ssvfz0qh9rgy606cdhda8og48gsnaam1cnlnsmb6fnb98pj8bxgx8545pveebm1ncnzxsorfuvudzv5s8zr1642yzm8p9lku25sbdd6nis99j0ds04gbcis5rhxxm51itgp6mnw02yfplwhcm9g09doj7i4d4fwb0vqubdb89mqkdi5kl3xqffnm20zzn5vyp9k2f1r8esrei48l51cgjp6pwfvjgclvp391vf9gsti4anlf5oldkmdl536zejjxd9zazrbtx203k163egeb5vsmg1j5utdvs36ym6rf4q95uv1fpqvded0kzsxkngw8wn86kwukt0bifm8hg52t6vcd9lhgyaoll1xsvpmfejzqukhe9gtlckdhjtlpuyh87gj6mmdxtt5h0vtgq7wxtexgenmy5p2ztpp00nd5q33nclygdhz65ya5gt == \s\e\u\r\8\v\j\v\f\v\9\n\g\4\6\3\k\y\a\s\f\u\9\8\h\t\s\e\m\q\0\f\1\4\n\g\b\q\z\g\r\c\h\h\p\j\f\t\u\b\0\r\7\h\6\3\r\4\k\o\4\4\y\j\f\4\u\y\z\5\s\s\v\f\z\0\q\h\9\r\g\y\6\0\6\c\d\h\d\a\8\o\g\4\8\g\s\n\a\a\m\1\c\n\l\n\s\m\b\6\f\n\b\9\8\p\j\8\b\x\g\x\8\5\4\5\p\v\e\e\b\m\1\n\c\n\z\x\s\o\r\f\u\v\u\d\z\v\5\s\8\z\r\1\6\4\2\y\z\m\8\p\9\l\k\u\2\5\s\b\d\d\6\n\i\s\9\9\j\0\d\s\0\4\g\b\c\i\s\5\r\h\x\x\m\5\1\i\t\g\p\6\m\n\w\0\2\y\f\p\l\w\h\c\m\9\g\0\9\d\o\j\7\i\4\d\4\f\w\b\0\v\q\u\b\d\b\8\9\m\q\k\d\i\5\k\l\3\x\q\f\f\n\m\2\0\z\z\n\5\v\y\p\9\k\2\f\1\r\8\e\s\r\e\i\4\8\l\5\1\c\g\j\p\6\p\w\f\v\j\g\c\l\v\p\3\9\1\v\f\9\g\s\t\i\4\a\n\l\f\5\o\l\d\k\m\d\l\5\3\6\z\e\j\j\x\d\9\z\a\z\r\b\t\x\2\0\3\k\1\6\3\e\g\e\b\5\v\s\m\g\1\j\5\u\t\d\v\s\3\6\y\m\6\r\f\4\q\9\5\u\v\1\f\p\q\v\d\e\d\0\k\z\s\x\k\n\g\w\8\w\n\8\6\k\w\u\k\t\0\b\i\f\m\8\h\g\5\2\t\6\v\c\d\9\l\h\g\y\a\o\l\l\1\x\s\v\p\m\f\e\j\z\q\u\k\h\e\9\g\t\l\c\k\d\h\j\t\l\p\u\y\h\8\7\g\j\6\m\m\d\x\t\t\5\h\0\v\t\g\q\7\w\x\t\e\x\g\e\n\m\y\5\p\2\z\t\p\p\0\0\n\d\5\q\3\3\n\c\l\y\g\d\h\z\6\5\y\a\5\g\t ]] 00:07:55.066 21:15:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.066 21:15:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:55.066 [2024-11-28 21:15:18.771083] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:55.066 [2024-11-28 21:15:18.771196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70253 ] 00:07:55.325 [2024-11-28 21:15:18.905417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.325 [2024-11-28 21:15:18.937652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.325  [2024-11-28T21:15:19.327Z] Copying: 512/512 [B] (average 250 kBps) 00:07:55.584 00:07:55.584 21:15:19 -- dd/posix.sh@93 -- # [[ seur8vjvfv9ng463kyasfu98htsemq0f14ngbqzgrchhpjftub0r7h63r4ko44yjf4uyz5ssvfz0qh9rgy606cdhda8og48gsnaam1cnlnsmb6fnb98pj8bxgx8545pveebm1ncnzxsorfuvudzv5s8zr1642yzm8p9lku25sbdd6nis99j0ds04gbcis5rhxxm51itgp6mnw02yfplwhcm9g09doj7i4d4fwb0vqubdb89mqkdi5kl3xqffnm20zzn5vyp9k2f1r8esrei48l51cgjp6pwfvjgclvp391vf9gsti4anlf5oldkmdl536zejjxd9zazrbtx203k163egeb5vsmg1j5utdvs36ym6rf4q95uv1fpqvded0kzsxkngw8wn86kwukt0bifm8hg52t6vcd9lhgyaoll1xsvpmfejzqukhe9gtlckdhjtlpuyh87gj6mmdxtt5h0vtgq7wxtexgenmy5p2ztpp00nd5q33nclygdhz65ya5gt == \s\e\u\r\8\v\j\v\f\v\9\n\g\4\6\3\k\y\a\s\f\u\9\8\h\t\s\e\m\q\0\f\1\4\n\g\b\q\z\g\r\c\h\h\p\j\f\t\u\b\0\r\7\h\6\3\r\4\k\o\4\4\y\j\f\4\u\y\z\5\s\s\v\f\z\0\q\h\9\r\g\y\6\0\6\c\d\h\d\a\8\o\g\4\8\g\s\n\a\a\m\1\c\n\l\n\s\m\b\6\f\n\b\9\8\p\j\8\b\x\g\x\8\5\4\5\p\v\e\e\b\m\1\n\c\n\z\x\s\o\r\f\u\v\u\d\z\v\5\s\8\z\r\1\6\4\2\y\z\m\8\p\9\l\k\u\2\5\s\b\d\d\6\n\i\s\9\9\j\0\d\s\0\4\g\b\c\i\s\5\r\h\x\x\m\5\1\i\t\g\p\6\m\n\w\0\2\y\f\p\l\w\h\c\m\9\g\0\9\d\o\j\7\i\4\d\4\f\w\b\0\v\q\u\b\d\b\8\9\m\q\k\d\i\5\k\l\3\x\q\f\f\n\m\2\0\z\z\n\5\v\y\p\9\k\2\f\1\r\8\e\s\r\e\i\4\8\l\5\1\c\g\j\p\6\p\w\f\v\j\g\c\l\v\p\3\9\1\v\f\9\g\s\t\i\4\a\n\l\f\5\o\l\d\k\m\d\l\5\3\6\z\e\j\j\x\d\9\z\a\z\r\b\t\x\2\0\3\k\1\6\3\e\g\e\b\5\v\s\m\g\1\j\5\u\t\d\v\s\3\6\y\m\6\r\f\4\q\9\5\u\v\1\f\p\q\v\d\e\d\0\k\z\s\x\k\n\g\w\8\w\n\8\6\k\w\u\k\t\0\b\i\f\m\8\h\g\5\2\t\6\v\c\d\9\l\h\g\y\a\o\l\l\1\x\s\v\p\m\f\e\j\z\q\u\k\h\e\9\g\t\l\c\k\d\h\j\t\l\p\u\y\h\8\7\g\j\6\m\m\d\x\t\t\5\h\0\v\t\g\q\7\w\x\t\e\x\g\e\n\m\y\5\p\2\z\t\p\p\0\0\n\d\5\q\3\3\n\c\l\y\g\d\h\z\6\5\y\a\5\g\t ]] 00:07:55.584 21:15:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.584 21:15:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:55.584 [2024-11-28 21:15:19.181441] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:55.584 [2024-11-28 21:15:19.181572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70255 ] 00:07:55.584 [2024-11-28 21:15:19.319932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.843 [2024-11-28 21:15:19.351097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.843  [2024-11-28T21:15:19.586Z] Copying: 512/512 [B] (average 250 kBps) 00:07:55.843 00:07:55.844 21:15:19 -- dd/posix.sh@93 -- # [[ seur8vjvfv9ng463kyasfu98htsemq0f14ngbqzgrchhpjftub0r7h63r4ko44yjf4uyz5ssvfz0qh9rgy606cdhda8og48gsnaam1cnlnsmb6fnb98pj8bxgx8545pveebm1ncnzxsorfuvudzv5s8zr1642yzm8p9lku25sbdd6nis99j0ds04gbcis5rhxxm51itgp6mnw02yfplwhcm9g09doj7i4d4fwb0vqubdb89mqkdi5kl3xqffnm20zzn5vyp9k2f1r8esrei48l51cgjp6pwfvjgclvp391vf9gsti4anlf5oldkmdl536zejjxd9zazrbtx203k163egeb5vsmg1j5utdvs36ym6rf4q95uv1fpqvded0kzsxkngw8wn86kwukt0bifm8hg52t6vcd9lhgyaoll1xsvpmfejzqukhe9gtlckdhjtlpuyh87gj6mmdxtt5h0vtgq7wxtexgenmy5p2ztpp00nd5q33nclygdhz65ya5gt == \s\e\u\r\8\v\j\v\f\v\9\n\g\4\6\3\k\y\a\s\f\u\9\8\h\t\s\e\m\q\0\f\1\4\n\g\b\q\z\g\r\c\h\h\p\j\f\t\u\b\0\r\7\h\6\3\r\4\k\o\4\4\y\j\f\4\u\y\z\5\s\s\v\f\z\0\q\h\9\r\g\y\6\0\6\c\d\h\d\a\8\o\g\4\8\g\s\n\a\a\m\1\c\n\l\n\s\m\b\6\f\n\b\9\8\p\j\8\b\x\g\x\8\5\4\5\p\v\e\e\b\m\1\n\c\n\z\x\s\o\r\f\u\v\u\d\z\v\5\s\8\z\r\1\6\4\2\y\z\m\8\p\9\l\k\u\2\5\s\b\d\d\6\n\i\s\9\9\j\0\d\s\0\4\g\b\c\i\s\5\r\h\x\x\m\5\1\i\t\g\p\6\m\n\w\0\2\y\f\p\l\w\h\c\m\9\g\0\9\d\o\j\7\i\4\d\4\f\w\b\0\v\q\u\b\d\b\8\9\m\q\k\d\i\5\k\l\3\x\q\f\f\n\m\2\0\z\z\n\5\v\y\p\9\k\2\f\1\r\8\e\s\r\e\i\4\8\l\5\1\c\g\j\p\6\p\w\f\v\j\g\c\l\v\p\3\9\1\v\f\9\g\s\t\i\4\a\n\l\f\5\o\l\d\k\m\d\l\5\3\6\z\e\j\j\x\d\9\z\a\z\r\b\t\x\2\0\3\k\1\6\3\e\g\e\b\5\v\s\m\g\1\j\5\u\t\d\v\s\3\6\y\m\6\r\f\4\q\9\5\u\v\1\f\p\q\v\d\e\d\0\k\z\s\x\k\n\g\w\8\w\n\8\6\k\w\u\k\t\0\b\i\f\m\8\h\g\5\2\t\6\v\c\d\9\l\h\g\y\a\o\l\l\1\x\s\v\p\m\f\e\j\z\q\u\k\h\e\9\g\t\l\c\k\d\h\j\t\l\p\u\y\h\8\7\g\j\6\m\m\d\x\t\t\5\h\0\v\t\g\q\7\w\x\t\e\x\g\e\n\m\y\5\p\2\z\t\p\p\0\0\n\d\5\q\3\3\n\c\l\y\g\d\h\z\6\5\y\a\5\g\t ]] 00:07:55.844 00:07:55.844 real 0m3.301s 00:07:55.844 user 0m1.576s 00:07:55.844 sys 0m0.747s 00:07:55.844 21:15:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:55.844 21:15:19 -- common/autotest_common.sh@10 -- # set +x 00:07:55.844 ************************************ 00:07:55.844 END TEST dd_flags_misc_forced_aio 00:07:55.844 ************************************ 00:07:56.103 21:15:19 -- dd/posix.sh@1 -- # cleanup 00:07:56.103 21:15:19 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:56.103 21:15:19 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:56.103 ************************************ 00:07:56.103 END TEST spdk_dd_posix 00:07:56.103 ************************************ 00:07:56.103 00:07:56.103 real 0m15.558s 00:07:56.103 user 0m6.437s 00:07:56.103 sys 0m3.317s 00:07:56.103 21:15:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.103 21:15:19 -- common/autotest_common.sh@10 -- # set +x 00:07:56.103 21:15:19 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:56.103 21:15:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:56.103 21:15:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 
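dd_flags_misc_forced_aio, which ends above, is a small matrix test: each read flag in (direct, nonblock) is paired with each write flag in (direct, nonblock, sync, dsync), and after every copy the 512-byte payload is compared against the source. A sketch of the same loop with plain dd, assuming a local scratch file (O_DIRECT needs aligned I/O, so 4096-byte blocks are used here; the flag names are the ones exercised in the log):

# pair every input flag with every output flag and verify each copy
src=$(mktemp) && dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=4096 count=1 status=none
for rflag in direct nonblock; do
    for wflag in direct nonblock sync dsync; do
        dd if="$src" of="$dst" bs=4096 iflag="$rflag" oflag="$wflag" status=none
        cmp -s "$src" "$dst" && echo "ok: iflag=$rflag oflag=$wflag" \
                             || echo "mismatch: iflag=$rflag oflag=$wflag"
    done
done
rm -f "$src" "$dst"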
00:07:56.103 21:15:19 -- common/autotest_common.sh@10 -- # set +x 00:07:56.103 ************************************ 00:07:56.103 START TEST spdk_dd_malloc 00:07:56.103 ************************************ 00:07:56.103 21:15:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:56.103 * Looking for test storage... 00:07:56.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:56.103 21:15:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:56.103 21:15:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:56.103 21:15:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:56.103 21:15:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:56.103 21:15:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:56.103 21:15:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:56.103 21:15:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:56.103 21:15:19 -- scripts/common.sh@335 -- # IFS=.-: 00:07:56.103 21:15:19 -- scripts/common.sh@335 -- # read -ra ver1 00:07:56.103 21:15:19 -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.103 21:15:19 -- scripts/common.sh@336 -- # read -ra ver2 00:07:56.103 21:15:19 -- scripts/common.sh@337 -- # local 'op=<' 00:07:56.103 21:15:19 -- scripts/common.sh@339 -- # ver1_l=2 00:07:56.103 21:15:19 -- scripts/common.sh@340 -- # ver2_l=1 00:07:56.103 21:15:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:56.103 21:15:19 -- scripts/common.sh@343 -- # case "$op" in 00:07:56.103 21:15:19 -- scripts/common.sh@344 -- # : 1 00:07:56.103 21:15:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:56.103 21:15:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.103 21:15:19 -- scripts/common.sh@364 -- # decimal 1 00:07:56.103 21:15:19 -- scripts/common.sh@352 -- # local d=1 00:07:56.103 21:15:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.103 21:15:19 -- scripts/common.sh@354 -- # echo 1 00:07:56.103 21:15:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:56.103 21:15:19 -- scripts/common.sh@365 -- # decimal 2 00:07:56.103 21:15:19 -- scripts/common.sh@352 -- # local d=2 00:07:56.103 21:15:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.103 21:15:19 -- scripts/common.sh@354 -- # echo 2 00:07:56.103 21:15:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:56.103 21:15:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:56.103 21:15:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:56.103 21:15:19 -- scripts/common.sh@367 -- # return 0 00:07:56.103 21:15:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.103 21:15:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:56.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.103 --rc genhtml_branch_coverage=1 00:07:56.103 --rc genhtml_function_coverage=1 00:07:56.103 --rc genhtml_legend=1 00:07:56.103 --rc geninfo_all_blocks=1 00:07:56.103 --rc geninfo_unexecuted_blocks=1 00:07:56.103 00:07:56.103 ' 00:07:56.103 21:15:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:56.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.103 --rc genhtml_branch_coverage=1 00:07:56.103 --rc genhtml_function_coverage=1 00:07:56.103 --rc genhtml_legend=1 00:07:56.103 --rc geninfo_all_blocks=1 00:07:56.103 --rc geninfo_unexecuted_blocks=1 00:07:56.103 00:07:56.103 ' 00:07:56.103 21:15:19 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:07:56.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.103 --rc genhtml_branch_coverage=1 00:07:56.103 --rc genhtml_function_coverage=1 00:07:56.103 --rc genhtml_legend=1 00:07:56.103 --rc geninfo_all_blocks=1 00:07:56.103 --rc geninfo_unexecuted_blocks=1 00:07:56.103 00:07:56.103 ' 00:07:56.103 21:15:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:56.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.103 --rc genhtml_branch_coverage=1 00:07:56.103 --rc genhtml_function_coverage=1 00:07:56.103 --rc genhtml_legend=1 00:07:56.103 --rc geninfo_all_blocks=1 00:07:56.103 --rc geninfo_unexecuted_blocks=1 00:07:56.103 00:07:56.103 ' 00:07:56.103 21:15:19 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.103 21:15:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.104 21:15:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.104 21:15:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.104 21:15:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.104 21:15:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.104 21:15:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.104 21:15:19 -- paths/export.sh@5 -- # export PATH 00:07:56.104 21:15:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.104 21:15:19 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:56.104 21:15:19 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:56.104 21:15:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.104 21:15:19 -- common/autotest_common.sh@10 -- # set +x 00:07:56.364 ************************************ 00:07:56.364 START TEST dd_malloc_copy 00:07:56.364 ************************************ 00:07:56.364 21:15:19 -- common/autotest_common.sh@1114 -- # malloc_copy 00:07:56.364 21:15:19 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:56.364 21:15:19 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:56.364 21:15:19 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:56.364 21:15:19 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:56.364 21:15:19 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:56.364 21:15:19 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:56.364 21:15:19 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:56.364 21:15:19 -- dd/malloc.sh@28 -- # gen_conf 00:07:56.364 21:15:19 -- dd/common.sh@31 -- # xtrace_disable 00:07:56.364 21:15:19 -- common/autotest_common.sh@10 -- # set +x 00:07:56.364 [2024-11-28 21:15:19.894962] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:56.364 [2024-11-28 21:15:19.895074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70336 ] 00:07:56.364 { 00:07:56.364 "subsystems": [ 00:07:56.364 { 00:07:56.364 "subsystem": "bdev", 00:07:56.364 "config": [ 00:07:56.364 { 00:07:56.364 "params": { 00:07:56.364 "block_size": 512, 00:07:56.364 "num_blocks": 1048576, 00:07:56.364 "name": "malloc0" 00:07:56.364 }, 00:07:56.364 "method": "bdev_malloc_create" 00:07:56.364 }, 00:07:56.364 { 00:07:56.364 "params": { 00:07:56.364 "block_size": 512, 00:07:56.364 "num_blocks": 1048576, 00:07:56.364 "name": "malloc1" 00:07:56.364 }, 00:07:56.364 "method": "bdev_malloc_create" 00:07:56.364 }, 00:07:56.364 { 00:07:56.364 "method": "bdev_wait_for_examine" 00:07:56.364 } 00:07:56.364 ] 00:07:56.364 } 00:07:56.364 ] 00:07:56.364 } 00:07:56.364 [2024-11-28 21:15:20.033490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.364 [2024-11-28 21:15:20.065601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.744  [2024-11-28T21:15:22.493Z] Copying: 236/512 [MB] (236 MBps) [2024-11-28T21:15:22.493Z] Copying: 474/512 [MB] (238 MBps) [2024-11-28T21:15:22.752Z] Copying: 512/512 [MB] (average 238 MBps) 00:07:59.009 00:07:59.009 21:15:22 -- dd/malloc.sh@33 -- # gen_conf 00:07:59.009 21:15:22 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:59.009 21:15:22 -- dd/common.sh@31 -- # xtrace_disable 00:07:59.009 21:15:22 -- common/autotest_common.sh@10 -- # set +x 00:07:59.268 [2024-11-28 21:15:22.781026] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:59.268 [2024-11-28 21:15:22.781108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70373 ] 00:07:59.268 { 00:07:59.268 "subsystems": [ 00:07:59.268 { 00:07:59.268 "subsystem": "bdev", 00:07:59.268 "config": [ 00:07:59.268 { 00:07:59.268 "params": { 00:07:59.268 "block_size": 512, 00:07:59.268 "num_blocks": 1048576, 00:07:59.268 "name": "malloc0" 00:07:59.268 }, 00:07:59.268 "method": "bdev_malloc_create" 00:07:59.268 }, 00:07:59.268 { 00:07:59.268 "params": { 00:07:59.268 "block_size": 512, 00:07:59.268 "num_blocks": 1048576, 00:07:59.268 "name": "malloc1" 00:07:59.268 }, 00:07:59.268 "method": "bdev_malloc_create" 00:07:59.268 }, 00:07:59.268 { 00:07:59.268 "method": "bdev_wait_for_examine" 00:07:59.268 } 00:07:59.268 ] 00:07:59.268 } 00:07:59.268 ] 00:07:59.268 } 00:07:59.268 [2024-11-28 21:15:22.915052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.268 [2024-11-28 21:15:22.959079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.647  [2024-11-28T21:15:25.327Z] Copying: 211/512 [MB] (211 MBps) [2024-11-28T21:15:25.584Z] Copying: 419/512 [MB] (207 MBps) [2024-11-28T21:15:26.155Z] Copying: 512/512 [MB] (average 214 MBps) 00:08:02.412 00:08:02.412 00:08:02.412 real 0m6.052s 00:08:02.412 user 0m5.409s 00:08:02.412 sys 0m0.488s 00:08:02.412 21:15:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.412 21:15:25 -- common/autotest_common.sh@10 -- # set +x 00:08:02.412 ************************************ 00:08:02.412 END TEST dd_malloc_copy 00:08:02.412 ************************************ 00:08:02.412 00:08:02.412 real 0m6.300s 00:08:02.412 user 0m5.548s 00:08:02.412 sys 0m0.601s 00:08:02.412 21:15:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.412 21:15:25 -- common/autotest_common.sh@10 -- # set +x 00:08:02.412 ************************************ 00:08:02.412 END TEST spdk_dd_malloc 00:08:02.412 ************************************ 00:08:02.412 21:15:25 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:02.412 21:15:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:02.412 21:15:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.412 21:15:25 -- common/autotest_common.sh@10 -- # set +x 00:08:02.412 ************************************ 00:08:02.412 START TEST spdk_dd_bdev_to_bdev 00:08:02.412 ************************************ 00:08:02.412 21:15:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:02.412 * Looking for test storage... 
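The malloc_copy test that closes above copies 512 MiB back and forth between two in-memory bdevs: the JSON fed to spdk_dd on a spare file descriptor declares malloc0 and malloc1 as 1048576 blocks of 512 bytes each, and the throughput lines (roughly 214-238 MBps) come from those bdev-to-bdev copies. A stand-alone version of the same invocation, assuming spdk_dd is built at the path used throughout this log and writing the config to a temporary file instead of a pipe:

# copy between two 512 MiB malloc bdevs defined in an inline JSON config
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=$(mktemp)
cat > "$conf" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --json "$conf"
rm -f "$conf"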
00:08:02.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:02.412 21:15:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:02.412 21:15:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:02.412 21:15:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:02.672 21:15:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:02.672 21:15:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:02.672 21:15:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:02.672 21:15:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:02.672 21:15:26 -- scripts/common.sh@335 -- # IFS=.-: 00:08:02.672 21:15:26 -- scripts/common.sh@335 -- # read -ra ver1 00:08:02.672 21:15:26 -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.672 21:15:26 -- scripts/common.sh@336 -- # read -ra ver2 00:08:02.672 21:15:26 -- scripts/common.sh@337 -- # local 'op=<' 00:08:02.672 21:15:26 -- scripts/common.sh@339 -- # ver1_l=2 00:08:02.672 21:15:26 -- scripts/common.sh@340 -- # ver2_l=1 00:08:02.672 21:15:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:02.672 21:15:26 -- scripts/common.sh@343 -- # case "$op" in 00:08:02.672 21:15:26 -- scripts/common.sh@344 -- # : 1 00:08:02.672 21:15:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:02.672 21:15:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:02.672 21:15:26 -- scripts/common.sh@364 -- # decimal 1 00:08:02.672 21:15:26 -- scripts/common.sh@352 -- # local d=1 00:08:02.672 21:15:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.672 21:15:26 -- scripts/common.sh@354 -- # echo 1 00:08:02.672 21:15:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:02.672 21:15:26 -- scripts/common.sh@365 -- # decimal 2 00:08:02.672 21:15:26 -- scripts/common.sh@352 -- # local d=2 00:08:02.672 21:15:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.672 21:15:26 -- scripts/common.sh@354 -- # echo 2 00:08:02.672 21:15:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:02.672 21:15:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:02.672 21:15:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:02.672 21:15:26 -- scripts/common.sh@367 -- # return 0 00:08:02.672 21:15:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.672 21:15:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:02.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.672 --rc genhtml_branch_coverage=1 00:08:02.672 --rc genhtml_function_coverage=1 00:08:02.672 --rc genhtml_legend=1 00:08:02.672 --rc geninfo_all_blocks=1 00:08:02.672 --rc geninfo_unexecuted_blocks=1 00:08:02.672 00:08:02.672 ' 00:08:02.672 21:15:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:02.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.672 --rc genhtml_branch_coverage=1 00:08:02.672 --rc genhtml_function_coverage=1 00:08:02.672 --rc genhtml_legend=1 00:08:02.672 --rc geninfo_all_blocks=1 00:08:02.672 --rc geninfo_unexecuted_blocks=1 00:08:02.672 00:08:02.672 ' 00:08:02.672 21:15:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:02.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.672 --rc genhtml_branch_coverage=1 00:08:02.672 --rc genhtml_function_coverage=1 00:08:02.672 --rc genhtml_legend=1 00:08:02.672 --rc geninfo_all_blocks=1 00:08:02.672 --rc geninfo_unexecuted_blocks=1 00:08:02.672 00:08:02.672 ' 00:08:02.672 21:15:26 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:02.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.672 --rc genhtml_branch_coverage=1 00:08:02.672 --rc genhtml_function_coverage=1 00:08:02.672 --rc genhtml_legend=1 00:08:02.672 --rc geninfo_all_blocks=1 00:08:02.672 --rc geninfo_unexecuted_blocks=1 00:08:02.672 00:08:02.672 ' 00:08:02.672 21:15:26 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.672 21:15:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.672 21:15:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.672 21:15:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.672 21:15:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.672 21:15:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.672 21:15:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.672 21:15:26 -- paths/export.sh@5 -- # export PATH 00:08:02.672 21:15:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:02.672 21:15:26 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:02.672 21:15:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:02.672 21:15:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.672 21:15:26 -- common/autotest_common.sh@10 -- # set +x 00:08:02.672 ************************************ 00:08:02.672 START TEST dd_inflate_file 00:08:02.672 ************************************ 00:08:02.672 21:15:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:02.673 [2024-11-28 21:15:26.240211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:02.673 [2024-11-28 21:15:26.240303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70484 ] 00:08:02.673 [2024-11-28 21:15:26.376451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.673 [2024-11-28 21:15:26.407717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.931  [2024-11-28T21:15:26.674Z] Copying: 64/64 [MB] (average 2370 MBps) 00:08:02.931 00:08:02.931 00:08:02.931 real 0m0.422s 00:08:02.931 user 0m0.185s 00:08:02.931 sys 0m0.118s 00:08:02.931 21:15:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.931 ************************************ 00:08:02.931 END TEST dd_inflate_file 00:08:02.931 ************************************ 00:08:02.932 21:15:26 -- common/autotest_common.sh@10 -- # set +x 00:08:02.932 21:15:26 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:02.932 21:15:26 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:02.932 21:15:26 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:02.932 21:15:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:02.932 21:15:26 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:02.932 21:15:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.932 21:15:26 -- dd/common.sh@31 -- # xtrace_disable 00:08:02.932 21:15:26 -- common/autotest_common.sh@10 -- # set +x 00:08:02.932 21:15:26 -- common/autotest_common.sh@10 -- # set +x 00:08:02.932 ************************************ 00:08:02.932 START TEST dd_copy_to_out_bdev 00:08:02.932 ************************************ 00:08:02.932 21:15:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:03.191 [2024-11-28 21:15:26.719320] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
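dd_inflate_file, which completes above, grows dd.dump0 to a usable size by appending 64 one-MiB blocks of zeroes after the 27-byte magic header, which is why the subsequent size check reports test_file0_size=67108891 (64 MiB + 27 bytes). The same inflation with plain dd, file name illustrative:

# append 64 MiB of zeroes after a short magic header (26 chars + newline = 27 bytes)
f=$(mktemp)
printf 'This Is Our Magic, find it\n' > "$f"
dd if=/dev/zero of="$f" oflag=append conv=notrunc bs=1048576 count=64 status=none
stat --printf='%s bytes\n' "$f"     # expect 67108891
rm -f "$f"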
00:08:03.191 [2024-11-28 21:15:26.719409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70516 ] 00:08:03.191 { 00:08:03.191 "subsystems": [ 00:08:03.191 { 00:08:03.191 "subsystem": "bdev", 00:08:03.191 "config": [ 00:08:03.191 { 00:08:03.191 "params": { 00:08:03.191 "trtype": "pcie", 00:08:03.191 "traddr": "0000:00:06.0", 00:08:03.191 "name": "Nvme0" 00:08:03.191 }, 00:08:03.191 "method": "bdev_nvme_attach_controller" 00:08:03.191 }, 00:08:03.191 { 00:08:03.191 "params": { 00:08:03.191 "trtype": "pcie", 00:08:03.191 "traddr": "0000:00:07.0", 00:08:03.191 "name": "Nvme1" 00:08:03.191 }, 00:08:03.191 "method": "bdev_nvme_attach_controller" 00:08:03.191 }, 00:08:03.191 { 00:08:03.191 "method": "bdev_wait_for_examine" 00:08:03.191 } 00:08:03.191 ] 00:08:03.191 } 00:08:03.191 ] 00:08:03.191 } 00:08:03.191 [2024-11-28 21:15:26.856372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.191 [2024-11-28 21:15:26.892134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.570  [2024-11-28T21:15:28.573Z] Copying: 44/64 [MB] (44 MBps) [2024-11-28T21:15:28.832Z] Copying: 64/64 [MB] (average 44 MBps) 00:08:05.089 00:08:05.089 00:08:05.089 real 0m1.992s 00:08:05.089 user 0m1.771s 00:08:05.089 sys 0m0.154s 00:08:05.089 21:15:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.089 ************************************ 00:08:05.089 END TEST dd_copy_to_out_bdev 00:08:05.089 ************************************ 00:08:05.089 21:15:28 -- common/autotest_common.sh@10 -- # set +x 00:08:05.089 21:15:28 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:05.089 21:15:28 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:05.089 21:15:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.089 21:15:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.089 21:15:28 -- common/autotest_common.sh@10 -- # set +x 00:08:05.089 ************************************ 00:08:05.089 START TEST dd_offset_magic 00:08:05.089 ************************************ 00:08:05.089 21:15:28 -- common/autotest_common.sh@1114 -- # offset_magic 00:08:05.089 21:15:28 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:05.089 21:15:28 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:05.089 21:15:28 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:05.089 21:15:28 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:05.089 21:15:28 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:05.089 21:15:28 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:05.089 21:15:28 -- dd/common.sh@31 -- # xtrace_disable 00:08:05.089 21:15:28 -- common/autotest_common.sh@10 -- # set +x 00:08:05.089 [2024-11-28 21:15:28.766926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
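dd_copy_to_out_bdev, finished above, pushes the inflated dump file into an NVMe namespace: the JSON config attaches two PCIe controllers (Nvme0 at 0000:00:06.0 and Nvme1 at 0000:00:07.0 in this run) and spdk_dd copies --if=dd.dump0 to --ob=Nvme0n1 in 1 MiB blocks. A trimmed-down sketch of that invocation, assuming the same build path, that the device is bound to a userspace driver, and attaching only the controller actually written to:

# copy a file into an NVMe bdev attached over PCIe
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=$(mktemp)
cat > "$conf" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:06.0" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
"$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=1048576 --json "$conf"
rm -f "$conf"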
00:08:05.089 [2024-11-28 21:15:28.767035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70560 ] 00:08:05.089 { 00:08:05.089 "subsystems": [ 00:08:05.089 { 00:08:05.089 "subsystem": "bdev", 00:08:05.089 "config": [ 00:08:05.089 { 00:08:05.089 "params": { 00:08:05.089 "trtype": "pcie", 00:08:05.089 "traddr": "0000:00:06.0", 00:08:05.089 "name": "Nvme0" 00:08:05.089 }, 00:08:05.089 "method": "bdev_nvme_attach_controller" 00:08:05.089 }, 00:08:05.089 { 00:08:05.089 "params": { 00:08:05.089 "trtype": "pcie", 00:08:05.089 "traddr": "0000:00:07.0", 00:08:05.089 "name": "Nvme1" 00:08:05.089 }, 00:08:05.089 "method": "bdev_nvme_attach_controller" 00:08:05.089 }, 00:08:05.089 { 00:08:05.089 "method": "bdev_wait_for_examine" 00:08:05.089 } 00:08:05.089 ] 00:08:05.089 } 00:08:05.089 ] 00:08:05.089 } 00:08:05.348 [2024-11-28 21:15:28.904913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.348 [2024-11-28 21:15:28.945966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.607  [2024-11-28T21:15:29.609Z] Copying: 65/65 [MB] (average 1000 MBps) 00:08:05.866 00:08:05.866 21:15:29 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:05.866 21:15:29 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:05.866 21:15:29 -- dd/common.sh@31 -- # xtrace_disable 00:08:05.866 21:15:29 -- common/autotest_common.sh@10 -- # set +x 00:08:05.866 [2024-11-28 21:15:29.411170] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
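Editor's note: each spdk_dd invocation above receives its bdev layout on --json /dev/fd/62, which is what a bash process substitution such as <(gen_conf) expands to; gen_conf is the harness helper whose "subsystems" JSON the log echoes after each command. A minimal sketch of the same offset-magic copy/read pair, reusing the bdev names, offsets and block size shown above (gen_conf is the harness's helper, not an spdk_dd option):

  # write 65 MiB starting 16 MiB into Nvme1n1, sourced from Nvme0n1
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 \
      --json <(gen_conf)
  # read a single 1 MiB block back from the same offset into a dump file
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
      --count=1 --skip=16 --bs=1048576 --json <(gen_conf)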
00:08:05.866 [2024-11-28 21:15:29.411267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70575 ] 00:08:05.866 { 00:08:05.866 "subsystems": [ 00:08:05.866 { 00:08:05.866 "subsystem": "bdev", 00:08:05.866 "config": [ 00:08:05.866 { 00:08:05.866 "params": { 00:08:05.866 "trtype": "pcie", 00:08:05.866 "traddr": "0000:00:06.0", 00:08:05.866 "name": "Nvme0" 00:08:05.866 }, 00:08:05.866 "method": "bdev_nvme_attach_controller" 00:08:05.866 }, 00:08:05.866 { 00:08:05.866 "params": { 00:08:05.866 "trtype": "pcie", 00:08:05.866 "traddr": "0000:00:07.0", 00:08:05.866 "name": "Nvme1" 00:08:05.866 }, 00:08:05.866 "method": "bdev_nvme_attach_controller" 00:08:05.866 }, 00:08:05.866 { 00:08:05.866 "method": "bdev_wait_for_examine" 00:08:05.866 } 00:08:05.866 ] 00:08:05.866 } 00:08:05.866 ] 00:08:05.866 } 00:08:05.866 [2024-11-28 21:15:29.546684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.866 [2024-11-28 21:15:29.577865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.125  [2024-11-28T21:15:30.127Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:06.384 00:08:06.384 21:15:29 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:06.384 21:15:29 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:06.384 21:15:29 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:06.384 21:15:29 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:06.384 21:15:29 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:06.384 21:15:29 -- dd/common.sh@31 -- # xtrace_disable 00:08:06.384 21:15:29 -- common/autotest_common.sh@10 -- # set +x 00:08:06.384 [2024-11-28 21:15:29.958029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:06.384 [2024-11-28 21:15:29.958122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70593 ] 00:08:06.384 { 00:08:06.384 "subsystems": [ 00:08:06.384 { 00:08:06.384 "subsystem": "bdev", 00:08:06.384 "config": [ 00:08:06.384 { 00:08:06.384 "params": { 00:08:06.384 "trtype": "pcie", 00:08:06.384 "traddr": "0000:00:06.0", 00:08:06.384 "name": "Nvme0" 00:08:06.384 }, 00:08:06.384 "method": "bdev_nvme_attach_controller" 00:08:06.384 }, 00:08:06.384 { 00:08:06.384 "params": { 00:08:06.384 "trtype": "pcie", 00:08:06.384 "traddr": "0000:00:07.0", 00:08:06.384 "name": "Nvme1" 00:08:06.384 }, 00:08:06.384 "method": "bdev_nvme_attach_controller" 00:08:06.384 }, 00:08:06.384 { 00:08:06.385 "method": "bdev_wait_for_examine" 00:08:06.385 } 00:08:06.385 ] 00:08:06.385 } 00:08:06.385 ] 00:08:06.385 } 00:08:06.385 [2024-11-28 21:15:30.096789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.643 [2024-11-28 21:15:30.135845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.643  [2024-11-28T21:15:30.644Z] Copying: 65/65 [MB] (average 1065 MBps) 00:08:06.902 00:08:06.902 21:15:30 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:06.902 21:15:30 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:06.902 21:15:30 -- dd/common.sh@31 -- # xtrace_disable 00:08:06.902 21:15:30 -- common/autotest_common.sh@10 -- # set +x 00:08:06.902 [2024-11-28 21:15:30.587704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:06.902 [2024-11-28 21:15:30.587799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70613 ] 00:08:06.902 { 00:08:06.902 "subsystems": [ 00:08:06.902 { 00:08:06.902 "subsystem": "bdev", 00:08:06.902 "config": [ 00:08:06.902 { 00:08:06.902 "params": { 00:08:06.902 "trtype": "pcie", 00:08:06.902 "traddr": "0000:00:06.0", 00:08:06.902 "name": "Nvme0" 00:08:06.902 }, 00:08:06.902 "method": "bdev_nvme_attach_controller" 00:08:06.902 }, 00:08:06.902 { 00:08:06.902 "params": { 00:08:06.902 "trtype": "pcie", 00:08:06.902 "traddr": "0000:00:07.0", 00:08:06.902 "name": "Nvme1" 00:08:06.902 }, 00:08:06.902 "method": "bdev_nvme_attach_controller" 00:08:06.902 }, 00:08:06.902 { 00:08:06.902 "method": "bdev_wait_for_examine" 00:08:06.902 } 00:08:06.902 ] 00:08:06.902 } 00:08:06.902 ] 00:08:06.902 } 00:08:07.159 [2024-11-28 21:15:30.723586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.159 [2024-11-28 21:15:30.757024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.417  [2024-11-28T21:15:31.160Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:07.417 00:08:07.417 21:15:31 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:07.417 21:15:31 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:07.417 00:08:07.417 real 0m2.353s 00:08:07.417 user 0m1.664s 00:08:07.417 sys 0m0.486s 00:08:07.417 21:15:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.417 21:15:31 -- common/autotest_common.sh@10 -- # set +x 00:08:07.417 ************************************ 00:08:07.417 END TEST dd_offset_magic 00:08:07.417 ************************************ 00:08:07.417 21:15:31 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:07.417 21:15:31 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:07.417 21:15:31 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:07.417 21:15:31 -- dd/common.sh@11 -- # local nvme_ref= 00:08:07.417 21:15:31 -- dd/common.sh@12 -- # local size=4194330 00:08:07.417 21:15:31 -- dd/common.sh@14 -- # local bs=1048576 00:08:07.417 21:15:31 -- dd/common.sh@15 -- # local count=5 00:08:07.417 21:15:31 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:07.417 21:15:31 -- dd/common.sh@18 -- # gen_conf 00:08:07.417 21:15:31 -- dd/common.sh@31 -- # xtrace_disable 00:08:07.417 21:15:31 -- common/autotest_common.sh@10 -- # set +x 00:08:07.417 [2024-11-28 21:15:31.159162] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:07.417 [2024-11-28 21:15:31.159254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70637 ] 00:08:07.676 { 00:08:07.676 "subsystems": [ 00:08:07.676 { 00:08:07.676 "subsystem": "bdev", 00:08:07.676 "config": [ 00:08:07.676 { 00:08:07.676 "params": { 00:08:07.676 "trtype": "pcie", 00:08:07.676 "traddr": "0000:00:06.0", 00:08:07.676 "name": "Nvme0" 00:08:07.676 }, 00:08:07.676 "method": "bdev_nvme_attach_controller" 00:08:07.676 }, 00:08:07.676 { 00:08:07.676 "params": { 00:08:07.676 "trtype": "pcie", 00:08:07.676 "traddr": "0000:00:07.0", 00:08:07.676 "name": "Nvme1" 00:08:07.676 }, 00:08:07.676 "method": "bdev_nvme_attach_controller" 00:08:07.676 }, 00:08:07.676 { 00:08:07.676 "method": "bdev_wait_for_examine" 00:08:07.676 } 00:08:07.676 ] 00:08:07.676 } 00:08:07.676 ] 00:08:07.676 } 00:08:07.676 [2024-11-28 21:15:31.297793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.676 [2024-11-28 21:15:31.333033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.935  [2024-11-28T21:15:31.678Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:07.935 00:08:07.935 21:15:31 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:07.935 21:15:31 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:07.935 21:15:31 -- dd/common.sh@11 -- # local nvme_ref= 00:08:07.935 21:15:31 -- dd/common.sh@12 -- # local size=4194330 00:08:07.935 21:15:31 -- dd/common.sh@14 -- # local bs=1048576 00:08:07.935 21:15:31 -- dd/common.sh@15 -- # local count=5 00:08:07.935 21:15:31 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:07.935 21:15:31 -- dd/common.sh@18 -- # gen_conf 00:08:07.935 21:15:31 -- dd/common.sh@31 -- # xtrace_disable 00:08:07.935 21:15:31 -- common/autotest_common.sh@10 -- # set +x 00:08:08.194 [2024-11-28 21:15:31.689248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:08.194 [2024-11-28 21:15:31.689342] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70657 ] 00:08:08.194 { 00:08:08.194 "subsystems": [ 00:08:08.194 { 00:08:08.194 "subsystem": "bdev", 00:08:08.194 "config": [ 00:08:08.194 { 00:08:08.194 "params": { 00:08:08.194 "trtype": "pcie", 00:08:08.194 "traddr": "0000:00:06.0", 00:08:08.194 "name": "Nvme0" 00:08:08.194 }, 00:08:08.194 "method": "bdev_nvme_attach_controller" 00:08:08.194 }, 00:08:08.194 { 00:08:08.194 "params": { 00:08:08.194 "trtype": "pcie", 00:08:08.194 "traddr": "0000:00:07.0", 00:08:08.194 "name": "Nvme1" 00:08:08.194 }, 00:08:08.194 "method": "bdev_nvme_attach_controller" 00:08:08.194 }, 00:08:08.194 { 00:08:08.194 "method": "bdev_wait_for_examine" 00:08:08.194 } 00:08:08.194 ] 00:08:08.194 } 00:08:08.194 ] 00:08:08.194 } 00:08:08.194 [2024-11-28 21:15:31.826591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.194 [2024-11-28 21:15:31.857697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.454  [2024-11-28T21:15:32.197Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:08.454 00:08:08.454 21:15:32 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:08.454 00:08:08.454 real 0m6.188s 00:08:08.454 user 0m4.514s 00:08:08.454 sys 0m1.172s 00:08:08.454 21:15:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.454 21:15:32 -- common/autotest_common.sh@10 -- # set +x 00:08:08.454 ************************************ 00:08:08.454 END TEST spdk_dd_bdev_to_bdev 00:08:08.454 ************************************ 00:08:08.714 21:15:32 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:08.714 21:15:32 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:08.714 21:15:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:08.714 21:15:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.714 21:15:32 -- common/autotest_common.sh@10 -- # set +x 00:08:08.714 ************************************ 00:08:08.714 START TEST spdk_dd_uring 00:08:08.714 ************************************ 00:08:08.714 21:15:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:08.714 * Looking for test storage... 
00:08:08.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:08.714 21:15:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:08.714 21:15:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:08.714 21:15:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:08.714 21:15:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:08.714 21:15:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:08.714 21:15:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:08.714 21:15:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:08.714 21:15:32 -- scripts/common.sh@335 -- # IFS=.-: 00:08:08.714 21:15:32 -- scripts/common.sh@335 -- # read -ra ver1 00:08:08.714 21:15:32 -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.714 21:15:32 -- scripts/common.sh@336 -- # read -ra ver2 00:08:08.714 21:15:32 -- scripts/common.sh@337 -- # local 'op=<' 00:08:08.714 21:15:32 -- scripts/common.sh@339 -- # ver1_l=2 00:08:08.714 21:15:32 -- scripts/common.sh@340 -- # ver2_l=1 00:08:08.714 21:15:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:08.714 21:15:32 -- scripts/common.sh@343 -- # case "$op" in 00:08:08.714 21:15:32 -- scripts/common.sh@344 -- # : 1 00:08:08.714 21:15:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:08.714 21:15:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:08.714 21:15:32 -- scripts/common.sh@364 -- # decimal 1 00:08:08.714 21:15:32 -- scripts/common.sh@352 -- # local d=1 00:08:08.714 21:15:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.714 21:15:32 -- scripts/common.sh@354 -- # echo 1 00:08:08.714 21:15:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:08.714 21:15:32 -- scripts/common.sh@365 -- # decimal 2 00:08:08.714 21:15:32 -- scripts/common.sh@352 -- # local d=2 00:08:08.714 21:15:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.714 21:15:32 -- scripts/common.sh@354 -- # echo 2 00:08:08.714 21:15:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:08.714 21:15:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:08.714 21:15:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:08.715 21:15:32 -- scripts/common.sh@367 -- # return 0 00:08:08.715 21:15:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.715 21:15:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:08.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.715 --rc genhtml_branch_coverage=1 00:08:08.715 --rc genhtml_function_coverage=1 00:08:08.715 --rc genhtml_legend=1 00:08:08.715 --rc geninfo_all_blocks=1 00:08:08.715 --rc geninfo_unexecuted_blocks=1 00:08:08.715 00:08:08.715 ' 00:08:08.715 21:15:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:08.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.715 --rc genhtml_branch_coverage=1 00:08:08.715 --rc genhtml_function_coverage=1 00:08:08.715 --rc genhtml_legend=1 00:08:08.715 --rc geninfo_all_blocks=1 00:08:08.715 --rc geninfo_unexecuted_blocks=1 00:08:08.715 00:08:08.715 ' 00:08:08.715 21:15:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:08.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.715 --rc genhtml_branch_coverage=1 00:08:08.715 --rc genhtml_function_coverage=1 00:08:08.715 --rc genhtml_legend=1 00:08:08.715 --rc geninfo_all_blocks=1 00:08:08.715 --rc geninfo_unexecuted_blocks=1 00:08:08.715 00:08:08.715 ' 00:08:08.715 21:15:32 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:08.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.715 --rc genhtml_branch_coverage=1 00:08:08.715 --rc genhtml_function_coverage=1 00:08:08.715 --rc genhtml_legend=1 00:08:08.715 --rc geninfo_all_blocks=1 00:08:08.715 --rc geninfo_unexecuted_blocks=1 00:08:08.715 00:08:08.715 ' 00:08:08.715 21:15:32 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.715 21:15:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.715 21:15:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.715 21:15:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.715 21:15:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.715 21:15:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.715 21:15:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.715 21:15:32 -- paths/export.sh@5 -- # export PATH 00:08:08.715 21:15:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.715 21:15:32 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:08.715 21:15:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:08.715 21:15:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.715 21:15:32 -- common/autotest_common.sh@10 -- # set +x 00:08:08.715 ************************************ 00:08:08.715 START TEST dd_uring_copy 00:08:08.715 ************************************ 00:08:08.715 21:15:32 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:08:08.715 21:15:32 -- dd/uring.sh@15 -- # local zram_dev_id 00:08:08.715 21:15:32 -- dd/uring.sh@16 -- # local magic 00:08:08.715 21:15:32 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:08.715 21:15:32 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:08.715 21:15:32 -- dd/uring.sh@19 -- # local verify_magic 00:08:08.715 21:15:32 -- dd/uring.sh@21 -- # init_zram 00:08:08.715 21:15:32 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:08:08.715 21:15:32 -- dd/common.sh@164 -- # return 00:08:08.715 21:15:32 -- dd/uring.sh@22 -- # create_zram_dev 00:08:08.715 21:15:32 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:08:08.715 21:15:32 -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:08.715 21:15:32 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:08.715 21:15:32 -- dd/common.sh@181 -- # local id=1 00:08:08.715 21:15:32 -- dd/common.sh@182 -- # local size=512M 00:08:08.715 21:15:32 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:08:08.715 21:15:32 -- dd/common.sh@186 -- # echo 512M 00:08:08.715 21:15:32 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:08.715 21:15:32 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:08.715 21:15:32 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:08.715 21:15:32 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:08.715 21:15:32 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:08.715 21:15:32 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:08.715 21:15:32 -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:08.715 21:15:32 -- dd/common.sh@98 -- # xtrace_disable 00:08:08.715 21:15:32 -- common/autotest_common.sh@10 -- # set +x 00:08:08.974 21:15:32 -- dd/uring.sh@41 -- # magic=zcamo1yewf2fdk7iwt7hjt7zwupwocezwarji620z860rij9ja4ztsuwgm9g32ptkym0qhp3fzopxdvzfb4yhuuwq9jh1dql319hu4lxz1of8dzrqvws3wcz04ce2ynbf0cye7ief5phguwqfnyjlorjf59phggn6asho4j2cxkeuzlz7ehlvptm0q77hwtgiy7pbxxsllljp9vu7vfob8tekgayr7deci3woke77ygd01yzz90z97y6f2psex1l9ri1ur9i2es6mklya7qnfhwzoshnhwnue9ew9o1d5zx1e7z04orz374ehp632g2bg29a0ko9v7yhjq526vc2gh39cze6s989jlgqz99tl3fgsn66uno0z8c86oqbigz1mt5030d0uv2pagg7p07gphrkzqhplmmxbe0zhn1hxn8a82ui8afpmumpdjl9gkepfiva1n7swucnynhd7bgbmls35176l49knnaczfb5miv7vo92mrnx3jfx5z4nckgukdgvlee6bbfdio4qjbwypzac897o2qe32olpmyw0ebuwj3sdqz5blcjkoe56ufrx0at5u7c83jbwf6rvzhf3yhyisk376tdc8bd842uiphht9uqo7npns6tluoxrj5dfcxywvauqgiwflpao2sehtk7j66398xwisuxxh55hvj0s9npt3ewuk8dki2mpsbm0g9e6qir0p9m7k2jqy21dwjthil8jwujbrmafldiw0ntqz85quwr6729nmfg3lnbnfjissuqut640z1uinjjc7u0gbcjge5p8zu7hzl3huos7szfyli0cooa2sq0edkry23l204uc5eakujf7jui4xyetos3g9kfccavgxcn2s8izzl1rmi13vbe4a7bxsmiphn21lperdqgn66wsd77ji4i9gu2i1ndhxmzw34zidpe4irkrii4pu480yyx5hsfvxly8xvguptf8gb761sxrh2yg98s6j4bleobo27gyzrytaokkevkbh315ymb9212t 00:08:08.975 21:15:32 -- dd/uring.sh@42 -- # echo 
zcamo1yewf2fdk7iwt7hjt7zwupwocezwarji620z860rij9ja4ztsuwgm9g32ptkym0qhp3fzopxdvzfb4yhuuwq9jh1dql319hu4lxz1of8dzrqvws3wcz04ce2ynbf0cye7ief5phguwqfnyjlorjf59phggn6asho4j2cxkeuzlz7ehlvptm0q77hwtgiy7pbxxsllljp9vu7vfob8tekgayr7deci3woke77ygd01yzz90z97y6f2psex1l9ri1ur9i2es6mklya7qnfhwzoshnhwnue9ew9o1d5zx1e7z04orz374ehp632g2bg29a0ko9v7yhjq526vc2gh39cze6s989jlgqz99tl3fgsn66uno0z8c86oqbigz1mt5030d0uv2pagg7p07gphrkzqhplmmxbe0zhn1hxn8a82ui8afpmumpdjl9gkepfiva1n7swucnynhd7bgbmls35176l49knnaczfb5miv7vo92mrnx3jfx5z4nckgukdgvlee6bbfdio4qjbwypzac897o2qe32olpmyw0ebuwj3sdqz5blcjkoe56ufrx0at5u7c83jbwf6rvzhf3yhyisk376tdc8bd842uiphht9uqo7npns6tluoxrj5dfcxywvauqgiwflpao2sehtk7j66398xwisuxxh55hvj0s9npt3ewuk8dki2mpsbm0g9e6qir0p9m7k2jqy21dwjthil8jwujbrmafldiw0ntqz85quwr6729nmfg3lnbnfjissuqut640z1uinjjc7u0gbcjge5p8zu7hzl3huos7szfyli0cooa2sq0edkry23l204uc5eakujf7jui4xyetos3g9kfccavgxcn2s8izzl1rmi13vbe4a7bxsmiphn21lperdqgn66wsd77ji4i9gu2i1ndhxmzw34zidpe4irkrii4pu480yyx5hsfvxly8xvguptf8gb761sxrh2yg98s6j4bleobo27gyzrytaokkevkbh315ymb9212t 00:08:08.975 21:15:32 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:08.975 [2024-11-28 21:15:32.504627] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:08.975 [2024-11-28 21:15:32.504718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70733 ] 00:08:08.975 [2024-11-28 21:15:32.640886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.975 [2024-11-28 21:15:32.672193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.543  [2024-11-28T21:15:33.286Z] Copying: 511/511 [MB] (average 1875 MBps) 00:08:09.543 00:08:09.803 21:15:33 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:09.803 21:15:33 -- dd/uring.sh@54 -- # gen_conf 00:08:09.803 21:15:33 -- dd/common.sh@31 -- # xtrace_disable 00:08:09.803 21:15:33 -- common/autotest_common.sh@10 -- # set +x 00:08:09.803 [2024-11-28 21:15:33.334803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
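Editor's note: the uring target in this test is not a real NVMe device but a zram disk created a few lines earlier through /sys/class/zram-control (dd/common.sh init_zram / set_zram_dev). A condensed sketch of that setup, with the 512M size and device id 1 taken from the log; the module load and the redirect target of the "echo 512M" are assumptions, since xtrace does not show them:

  [[ -e /sys/class/zram-control ]] || modprobe zram    # assumption: load zram if absent
  id=$(cat /sys/class/zram-control/hot_add)            # allocates /dev/zram$id; printed 1 above
  echo 512M > "/sys/block/zram${id}/disksize"          # size the compressed RAM-backed disk
  # spdk_dd then attaches /dev/zram1 as a uring bdev named uring0 (bdev_uring_create)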
00:08:09.803 [2024-11-28 21:15:33.334906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70736 ] 00:08:09.803 { 00:08:09.803 "subsystems": [ 00:08:09.803 { 00:08:09.803 "subsystem": "bdev", 00:08:09.803 "config": [ 00:08:09.803 { 00:08:09.803 "params": { 00:08:09.803 "block_size": 512, 00:08:09.803 "num_blocks": 1048576, 00:08:09.803 "name": "malloc0" 00:08:09.803 }, 00:08:09.803 "method": "bdev_malloc_create" 00:08:09.803 }, 00:08:09.803 { 00:08:09.803 "params": { 00:08:09.803 "filename": "/dev/zram1", 00:08:09.803 "name": "uring0" 00:08:09.803 }, 00:08:09.803 "method": "bdev_uring_create" 00:08:09.803 }, 00:08:09.803 { 00:08:09.803 "method": "bdev_wait_for_examine" 00:08:09.803 } 00:08:09.803 ] 00:08:09.803 } 00:08:09.803 ] 00:08:09.803 } 00:08:09.803 [2024-11-28 21:15:33.473865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.803 [2024-11-28 21:15:33.504807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.188  [2024-11-28T21:15:35.867Z] Copying: 198/512 [MB] (198 MBps) [2024-11-28T21:15:36.436Z] Copying: 404/512 [MB] (205 MBps) [2024-11-28T21:15:36.437Z] Copying: 512/512 [MB] (average 203 MBps) 00:08:12.694 00:08:12.694 21:15:36 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:12.694 21:15:36 -- dd/uring.sh@60 -- # gen_conf 00:08:12.694 21:15:36 -- dd/common.sh@31 -- # xtrace_disable 00:08:12.694 21:15:36 -- common/autotest_common.sh@10 -- # set +x 00:08:12.953 [2024-11-28 21:15:36.443104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:12.953 [2024-11-28 21:15:36.443207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70779 ] 00:08:12.953 { 00:08:12.953 "subsystems": [ 00:08:12.953 { 00:08:12.953 "subsystem": "bdev", 00:08:12.953 "config": [ 00:08:12.953 { 00:08:12.953 "params": { 00:08:12.953 "block_size": 512, 00:08:12.953 "num_blocks": 1048576, 00:08:12.953 "name": "malloc0" 00:08:12.953 }, 00:08:12.953 "method": "bdev_malloc_create" 00:08:12.953 }, 00:08:12.953 { 00:08:12.953 "params": { 00:08:12.953 "filename": "/dev/zram1", 00:08:12.953 "name": "uring0" 00:08:12.953 }, 00:08:12.953 "method": "bdev_uring_create" 00:08:12.953 }, 00:08:12.953 { 00:08:12.953 "method": "bdev_wait_for_examine" 00:08:12.953 } 00:08:12.953 ] 00:08:12.953 } 00:08:12.953 ] 00:08:12.953 } 00:08:12.953 [2024-11-28 21:15:36.578679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.953 [2024-11-28 21:15:36.612786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.333  [2024-11-28T21:15:39.013Z] Copying: 159/512 [MB] (159 MBps) [2024-11-28T21:15:39.951Z] Copying: 299/512 [MB] (140 MBps) [2024-11-28T21:15:40.210Z] Copying: 455/512 [MB] (156 MBps) [2024-11-28T21:15:40.470Z] Copying: 512/512 [MB] (average 149 MBps) 00:08:16.727 00:08:16.727 21:15:40 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:16.727 21:15:40 -- dd/uring.sh@66 -- # [[ zcamo1yewf2fdk7iwt7hjt7zwupwocezwarji620z860rij9ja4ztsuwgm9g32ptkym0qhp3fzopxdvzfb4yhuuwq9jh1dql319hu4lxz1of8dzrqvws3wcz04ce2ynbf0cye7ief5phguwqfnyjlorjf59phggn6asho4j2cxkeuzlz7ehlvptm0q77hwtgiy7pbxxsllljp9vu7vfob8tekgayr7deci3woke77ygd01yzz90z97y6f2psex1l9ri1ur9i2es6mklya7qnfhwzoshnhwnue9ew9o1d5zx1e7z04orz374ehp632g2bg29a0ko9v7yhjq526vc2gh39cze6s989jlgqz99tl3fgsn66uno0z8c86oqbigz1mt5030d0uv2pagg7p07gphrkzqhplmmxbe0zhn1hxn8a82ui8afpmumpdjl9gkepfiva1n7swucnynhd7bgbmls35176l49knnaczfb5miv7vo92mrnx3jfx5z4nckgukdgvlee6bbfdio4qjbwypzac897o2qe32olpmyw0ebuwj3sdqz5blcjkoe56ufrx0at5u7c83jbwf6rvzhf3yhyisk376tdc8bd842uiphht9uqo7npns6tluoxrj5dfcxywvauqgiwflpao2sehtk7j66398xwisuxxh55hvj0s9npt3ewuk8dki2mpsbm0g9e6qir0p9m7k2jqy21dwjthil8jwujbrmafldiw0ntqz85quwr6729nmfg3lnbnfjissuqut640z1uinjjc7u0gbcjge5p8zu7hzl3huos7szfyli0cooa2sq0edkry23l204uc5eakujf7jui4xyetos3g9kfccavgxcn2s8izzl1rmi13vbe4a7bxsmiphn21lperdqgn66wsd77ji4i9gu2i1ndhxmzw34zidpe4irkrii4pu480yyx5hsfvxly8xvguptf8gb761sxrh2yg98s6j4bleobo27gyzrytaokkevkbh315ymb9212t == 
\z\c\a\m\o\1\y\e\w\f\2\f\d\k\7\i\w\t\7\h\j\t\7\z\w\u\p\w\o\c\e\z\w\a\r\j\i\6\2\0\z\8\6\0\r\i\j\9\j\a\4\z\t\s\u\w\g\m\9\g\3\2\p\t\k\y\m\0\q\h\p\3\f\z\o\p\x\d\v\z\f\b\4\y\h\u\u\w\q\9\j\h\1\d\q\l\3\1\9\h\u\4\l\x\z\1\o\f\8\d\z\r\q\v\w\s\3\w\c\z\0\4\c\e\2\y\n\b\f\0\c\y\e\7\i\e\f\5\p\h\g\u\w\q\f\n\y\j\l\o\r\j\f\5\9\p\h\g\g\n\6\a\s\h\o\4\j\2\c\x\k\e\u\z\l\z\7\e\h\l\v\p\t\m\0\q\7\7\h\w\t\g\i\y\7\p\b\x\x\s\l\l\l\j\p\9\v\u\7\v\f\o\b\8\t\e\k\g\a\y\r\7\d\e\c\i\3\w\o\k\e\7\7\y\g\d\0\1\y\z\z\9\0\z\9\7\y\6\f\2\p\s\e\x\1\l\9\r\i\1\u\r\9\i\2\e\s\6\m\k\l\y\a\7\q\n\f\h\w\z\o\s\h\n\h\w\n\u\e\9\e\w\9\o\1\d\5\z\x\1\e\7\z\0\4\o\r\z\3\7\4\e\h\p\6\3\2\g\2\b\g\2\9\a\0\k\o\9\v\7\y\h\j\q\5\2\6\v\c\2\g\h\3\9\c\z\e\6\s\9\8\9\j\l\g\q\z\9\9\t\l\3\f\g\s\n\6\6\u\n\o\0\z\8\c\8\6\o\q\b\i\g\z\1\m\t\5\0\3\0\d\0\u\v\2\p\a\g\g\7\p\0\7\g\p\h\r\k\z\q\h\p\l\m\m\x\b\e\0\z\h\n\1\h\x\n\8\a\8\2\u\i\8\a\f\p\m\u\m\p\d\j\l\9\g\k\e\p\f\i\v\a\1\n\7\s\w\u\c\n\y\n\h\d\7\b\g\b\m\l\s\3\5\1\7\6\l\4\9\k\n\n\a\c\z\f\b\5\m\i\v\7\v\o\9\2\m\r\n\x\3\j\f\x\5\z\4\n\c\k\g\u\k\d\g\v\l\e\e\6\b\b\f\d\i\o\4\q\j\b\w\y\p\z\a\c\8\9\7\o\2\q\e\3\2\o\l\p\m\y\w\0\e\b\u\w\j\3\s\d\q\z\5\b\l\c\j\k\o\e\5\6\u\f\r\x\0\a\t\5\u\7\c\8\3\j\b\w\f\6\r\v\z\h\f\3\y\h\y\i\s\k\3\7\6\t\d\c\8\b\d\8\4\2\u\i\p\h\h\t\9\u\q\o\7\n\p\n\s\6\t\l\u\o\x\r\j\5\d\f\c\x\y\w\v\a\u\q\g\i\w\f\l\p\a\o\2\s\e\h\t\k\7\j\6\6\3\9\8\x\w\i\s\u\x\x\h\5\5\h\v\j\0\s\9\n\p\t\3\e\w\u\k\8\d\k\i\2\m\p\s\b\m\0\g\9\e\6\q\i\r\0\p\9\m\7\k\2\j\q\y\2\1\d\w\j\t\h\i\l\8\j\w\u\j\b\r\m\a\f\l\d\i\w\0\n\t\q\z\8\5\q\u\w\r\6\7\2\9\n\m\f\g\3\l\n\b\n\f\j\i\s\s\u\q\u\t\6\4\0\z\1\u\i\n\j\j\c\7\u\0\g\b\c\j\g\e\5\p\8\z\u\7\h\z\l\3\h\u\o\s\7\s\z\f\y\l\i\0\c\o\o\a\2\s\q\0\e\d\k\r\y\2\3\l\2\0\4\u\c\5\e\a\k\u\j\f\7\j\u\i\4\x\y\e\t\o\s\3\g\9\k\f\c\c\a\v\g\x\c\n\2\s\8\i\z\z\l\1\r\m\i\1\3\v\b\e\4\a\7\b\x\s\m\i\p\h\n\2\1\l\p\e\r\d\q\g\n\6\6\w\s\d\7\7\j\i\4\i\9\g\u\2\i\1\n\d\h\x\m\z\w\3\4\z\i\d\p\e\4\i\r\k\r\i\i\4\p\u\4\8\0\y\y\x\5\h\s\f\v\x\l\y\8\x\v\g\u\p\t\f\8\g\b\7\6\1\s\x\r\h\2\y\g\9\8\s\6\j\4\b\l\e\o\b\o\2\7\g\y\z\r\y\t\a\o\k\k\e\v\k\b\h\3\1\5\y\m\b\9\2\1\2\t ]] 00:08:16.727 21:15:40 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:16.727 21:15:40 -- dd/uring.sh@69 -- # [[ zcamo1yewf2fdk7iwt7hjt7zwupwocezwarji620z860rij9ja4ztsuwgm9g32ptkym0qhp3fzopxdvzfb4yhuuwq9jh1dql319hu4lxz1of8dzrqvws3wcz04ce2ynbf0cye7ief5phguwqfnyjlorjf59phggn6asho4j2cxkeuzlz7ehlvptm0q77hwtgiy7pbxxsllljp9vu7vfob8tekgayr7deci3woke77ygd01yzz90z97y6f2psex1l9ri1ur9i2es6mklya7qnfhwzoshnhwnue9ew9o1d5zx1e7z04orz374ehp632g2bg29a0ko9v7yhjq526vc2gh39cze6s989jlgqz99tl3fgsn66uno0z8c86oqbigz1mt5030d0uv2pagg7p07gphrkzqhplmmxbe0zhn1hxn8a82ui8afpmumpdjl9gkepfiva1n7swucnynhd7bgbmls35176l49knnaczfb5miv7vo92mrnx3jfx5z4nckgukdgvlee6bbfdio4qjbwypzac897o2qe32olpmyw0ebuwj3sdqz5blcjkoe56ufrx0at5u7c83jbwf6rvzhf3yhyisk376tdc8bd842uiphht9uqo7npns6tluoxrj5dfcxywvauqgiwflpao2sehtk7j66398xwisuxxh55hvj0s9npt3ewuk8dki2mpsbm0g9e6qir0p9m7k2jqy21dwjthil8jwujbrmafldiw0ntqz85quwr6729nmfg3lnbnfjissuqut640z1uinjjc7u0gbcjge5p8zu7hzl3huos7szfyli0cooa2sq0edkry23l204uc5eakujf7jui4xyetos3g9kfccavgxcn2s8izzl1rmi13vbe4a7bxsmiphn21lperdqgn66wsd77ji4i9gu2i1ndhxmzw34zidpe4irkrii4pu480yyx5hsfvxly8xvguptf8gb761sxrh2yg98s6j4bleobo27gyzrytaokkevkbh315ymb9212t == 
\z\c\a\m\o\1\y\e\w\f\2\f\d\k\7\i\w\t\7\h\j\t\7\z\w\u\p\w\o\c\e\z\w\a\r\j\i\6\2\0\z\8\6\0\r\i\j\9\j\a\4\z\t\s\u\w\g\m\9\g\3\2\p\t\k\y\m\0\q\h\p\3\f\z\o\p\x\d\v\z\f\b\4\y\h\u\u\w\q\9\j\h\1\d\q\l\3\1\9\h\u\4\l\x\z\1\o\f\8\d\z\r\q\v\w\s\3\w\c\z\0\4\c\e\2\y\n\b\f\0\c\y\e\7\i\e\f\5\p\h\g\u\w\q\f\n\y\j\l\o\r\j\f\5\9\p\h\g\g\n\6\a\s\h\o\4\j\2\c\x\k\e\u\z\l\z\7\e\h\l\v\p\t\m\0\q\7\7\h\w\t\g\i\y\7\p\b\x\x\s\l\l\l\j\p\9\v\u\7\v\f\o\b\8\t\e\k\g\a\y\r\7\d\e\c\i\3\w\o\k\e\7\7\y\g\d\0\1\y\z\z\9\0\z\9\7\y\6\f\2\p\s\e\x\1\l\9\r\i\1\u\r\9\i\2\e\s\6\m\k\l\y\a\7\q\n\f\h\w\z\o\s\h\n\h\w\n\u\e\9\e\w\9\o\1\d\5\z\x\1\e\7\z\0\4\o\r\z\3\7\4\e\h\p\6\3\2\g\2\b\g\2\9\a\0\k\o\9\v\7\y\h\j\q\5\2\6\v\c\2\g\h\3\9\c\z\e\6\s\9\8\9\j\l\g\q\z\9\9\t\l\3\f\g\s\n\6\6\u\n\o\0\z\8\c\8\6\o\q\b\i\g\z\1\m\t\5\0\3\0\d\0\u\v\2\p\a\g\g\7\p\0\7\g\p\h\r\k\z\q\h\p\l\m\m\x\b\e\0\z\h\n\1\h\x\n\8\a\8\2\u\i\8\a\f\p\m\u\m\p\d\j\l\9\g\k\e\p\f\i\v\a\1\n\7\s\w\u\c\n\y\n\h\d\7\b\g\b\m\l\s\3\5\1\7\6\l\4\9\k\n\n\a\c\z\f\b\5\m\i\v\7\v\o\9\2\m\r\n\x\3\j\f\x\5\z\4\n\c\k\g\u\k\d\g\v\l\e\e\6\b\b\f\d\i\o\4\q\j\b\w\y\p\z\a\c\8\9\7\o\2\q\e\3\2\o\l\p\m\y\w\0\e\b\u\w\j\3\s\d\q\z\5\b\l\c\j\k\o\e\5\6\u\f\r\x\0\a\t\5\u\7\c\8\3\j\b\w\f\6\r\v\z\h\f\3\y\h\y\i\s\k\3\7\6\t\d\c\8\b\d\8\4\2\u\i\p\h\h\t\9\u\q\o\7\n\p\n\s\6\t\l\u\o\x\r\j\5\d\f\c\x\y\w\v\a\u\q\g\i\w\f\l\p\a\o\2\s\e\h\t\k\7\j\6\6\3\9\8\x\w\i\s\u\x\x\h\5\5\h\v\j\0\s\9\n\p\t\3\e\w\u\k\8\d\k\i\2\m\p\s\b\m\0\g\9\e\6\q\i\r\0\p\9\m\7\k\2\j\q\y\2\1\d\w\j\t\h\i\l\8\j\w\u\j\b\r\m\a\f\l\d\i\w\0\n\t\q\z\8\5\q\u\w\r\6\7\2\9\n\m\f\g\3\l\n\b\n\f\j\i\s\s\u\q\u\t\6\4\0\z\1\u\i\n\j\j\c\7\u\0\g\b\c\j\g\e\5\p\8\z\u\7\h\z\l\3\h\u\o\s\7\s\z\f\y\l\i\0\c\o\o\a\2\s\q\0\e\d\k\r\y\2\3\l\2\0\4\u\c\5\e\a\k\u\j\f\7\j\u\i\4\x\y\e\t\o\s\3\g\9\k\f\c\c\a\v\g\x\c\n\2\s\8\i\z\z\l\1\r\m\i\1\3\v\b\e\4\a\7\b\x\s\m\i\p\h\n\2\1\l\p\e\r\d\q\g\n\6\6\w\s\d\7\7\j\i\4\i\9\g\u\2\i\1\n\d\h\x\m\z\w\3\4\z\i\d\p\e\4\i\r\k\r\i\i\4\p\u\4\8\0\y\y\x\5\h\s\f\v\x\l\y\8\x\v\g\u\p\t\f\8\g\b\7\6\1\s\x\r\h\2\y\g\9\8\s\6\j\4\b\l\e\o\b\o\2\7\g\y\z\r\y\t\a\o\k\k\e\v\k\b\h\3\1\5\y\m\b\9\2\1\2\t ]] 00:08:16.727 21:15:40 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:17.295 21:15:40 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:17.295 21:15:40 -- dd/uring.sh@75 -- # gen_conf 00:08:17.295 21:15:40 -- dd/common.sh@31 -- # xtrace_disable 00:08:17.295 21:15:40 -- common/autotest_common.sh@10 -- # set +x 00:08:17.295 [2024-11-28 21:15:40.830601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
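Editor's note: the two long [[ ... == ... ]] checks above compare 1024 bytes read back into verify_magic against the random magic string produced by gen_bytes before the copies; the odd 536869887-byte append earlier appears to be 512 MiB minus the 1025 bytes (1024-byte magic plus a newline) already written at the head of magic.dump0. A condensed sketch of the verification, assuming $magic holds the generated string and that the reads are redirected from the dump files (the redirections are not visible in the xtrace output):

  read -rn1024 verify_magic < /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0
  [[ "$verify_magic" == "$magic" ]]        # magic survived the write into uring0
  read -rn1024 verify_magic < /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1
  [[ "$verify_magic" == "$magic" ]]        # and the read back out of uring0
  diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 \
          /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1   # whole-file comparison (dd/uring.sh@71)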
00:08:17.295 [2024-11-28 21:15:40.830713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70851 ] 00:08:17.295 { 00:08:17.295 "subsystems": [ 00:08:17.295 { 00:08:17.295 "subsystem": "bdev", 00:08:17.295 "config": [ 00:08:17.295 { 00:08:17.295 "params": { 00:08:17.295 "block_size": 512, 00:08:17.295 "num_blocks": 1048576, 00:08:17.295 "name": "malloc0" 00:08:17.295 }, 00:08:17.295 "method": "bdev_malloc_create" 00:08:17.295 }, 00:08:17.295 { 00:08:17.295 "params": { 00:08:17.295 "filename": "/dev/zram1", 00:08:17.295 "name": "uring0" 00:08:17.295 }, 00:08:17.295 "method": "bdev_uring_create" 00:08:17.295 }, 00:08:17.295 { 00:08:17.295 "method": "bdev_wait_for_examine" 00:08:17.295 } 00:08:17.295 ] 00:08:17.295 } 00:08:17.295 ] 00:08:17.296 } 00:08:17.296 [2024-11-28 21:15:40.969113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.296 [2024-11-28 21:15:41.002347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.672  [2024-11-28T21:15:43.359Z] Copying: 173/512 [MB] (173 MBps) [2024-11-28T21:15:44.297Z] Copying: 345/512 [MB] (172 MBps) [2024-11-28T21:15:44.297Z] Copying: 512/512 [MB] (average 173 MBps) 00:08:20.554 00:08:20.814 21:15:44 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:20.814 21:15:44 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:20.814 21:15:44 -- dd/uring.sh@87 -- # : 00:08:20.814 21:15:44 -- dd/uring.sh@87 -- # : 00:08:20.814 21:15:44 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:20.814 21:15:44 -- dd/uring.sh@87 -- # gen_conf 00:08:20.814 21:15:44 -- dd/common.sh@31 -- # xtrace_disable 00:08:20.814 21:15:44 -- common/autotest_common.sh@10 -- # set +x 00:08:20.814 [2024-11-28 21:15:44.351343] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:20.814 [2024-11-28 21:15:44.351871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70897 ] 00:08:20.814 { 00:08:20.814 "subsystems": [ 00:08:20.814 { 00:08:20.814 "subsystem": "bdev", 00:08:20.814 "config": [ 00:08:20.814 { 00:08:20.814 "params": { 00:08:20.814 "block_size": 512, 00:08:20.814 "num_blocks": 1048576, 00:08:20.814 "name": "malloc0" 00:08:20.814 }, 00:08:20.814 "method": "bdev_malloc_create" 00:08:20.814 }, 00:08:20.814 { 00:08:20.814 "params": { 00:08:20.814 "filename": "/dev/zram1", 00:08:20.814 "name": "uring0" 00:08:20.814 }, 00:08:20.814 "method": "bdev_uring_create" 00:08:20.814 }, 00:08:20.814 { 00:08:20.814 "params": { 00:08:20.814 "name": "uring0" 00:08:20.814 }, 00:08:20.814 "method": "bdev_uring_delete" 00:08:20.814 }, 00:08:20.814 { 00:08:20.814 "method": "bdev_wait_for_examine" 00:08:20.814 } 00:08:20.814 ] 00:08:20.814 } 00:08:20.814 ] 00:08:20.814 } 00:08:20.814 [2024-11-28 21:15:44.487222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.814 [2024-11-28 21:15:44.516039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.073  [2024-11-28T21:15:45.076Z] Copying: 0/0 [B] (average 0 Bps) 00:08:21.333 00:08:21.333 21:15:44 -- dd/uring.sh@94 -- # : 00:08:21.333 21:15:44 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:21.333 21:15:44 -- dd/uring.sh@94 -- # gen_conf 00:08:21.333 21:15:44 -- common/autotest_common.sh@650 -- # local es=0 00:08:21.333 21:15:44 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:21.333 21:15:44 -- dd/common.sh@31 -- # xtrace_disable 00:08:21.333 21:15:44 -- common/autotest_common.sh@10 -- # set +x 00:08:21.333 21:15:44 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.333 21:15:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.333 21:15:44 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.333 21:15:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.333 21:15:44 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.333 21:15:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.333 21:15:44 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.333 21:15:44 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:21.333 21:15:44 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:21.333 [2024-11-28 21:15:44.955395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
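Editor's note: the NOT-wrapped spdk_dd invocation above is a deliberate negative test. The config both creates and deletes uring0, so the copy must fail, and the harness's NOT helper inverts the exit status, as the "Could not open bdev uring0: No such device" handling below confirms. A sketch of the same check outside the harness, with plain ! standing in for NOT and /dev/null standing in for the harness's file descriptors (both are substitutions, not what the script runs):

  ! /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/null \
        --json <(gen_conf)    # must fail: uring0 was removed by bdev_uring_delete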
00:08:21.333 [2024-11-28 21:15:44.955485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70920 ] 00:08:21.333 { 00:08:21.333 "subsystems": [ 00:08:21.333 { 00:08:21.333 "subsystem": "bdev", 00:08:21.333 "config": [ 00:08:21.333 { 00:08:21.333 "params": { 00:08:21.333 "block_size": 512, 00:08:21.333 "num_blocks": 1048576, 00:08:21.333 "name": "malloc0" 00:08:21.333 }, 00:08:21.333 "method": "bdev_malloc_create" 00:08:21.333 }, 00:08:21.333 { 00:08:21.333 "params": { 00:08:21.333 "filename": "/dev/zram1", 00:08:21.333 "name": "uring0" 00:08:21.333 }, 00:08:21.333 "method": "bdev_uring_create" 00:08:21.333 }, 00:08:21.333 { 00:08:21.333 "params": { 00:08:21.333 "name": "uring0" 00:08:21.333 }, 00:08:21.333 "method": "bdev_uring_delete" 00:08:21.333 }, 00:08:21.333 { 00:08:21.333 "method": "bdev_wait_for_examine" 00:08:21.333 } 00:08:21.333 ] 00:08:21.333 } 00:08:21.333 ] 00:08:21.333 } 00:08:21.596 [2024-11-28 21:15:45.092149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.596 [2024-11-28 21:15:45.122636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.596 [2024-11-28 21:15:45.267044] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:21.596 [2024-11-28 21:15:45.267111] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:21.596 [2024-11-28 21:15:45.267124] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:08:21.596 [2024-11-28 21:15:45.267134] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.858 [2024-11-28 21:15:45.446560] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:21.858 21:15:45 -- common/autotest_common.sh@653 -- # es=237 00:08:21.858 21:15:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:21.858 21:15:45 -- common/autotest_common.sh@662 -- # es=109 00:08:21.858 21:15:45 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:21.858 21:15:45 -- common/autotest_common.sh@670 -- # es=1 00:08:21.858 21:15:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:21.858 21:15:45 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:21.858 21:15:45 -- dd/common.sh@172 -- # local id=1 00:08:21.858 21:15:45 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:21.858 21:15:45 -- dd/common.sh@176 -- # echo 1 00:08:21.858 21:15:45 -- dd/common.sh@177 -- # echo 1 00:08:21.858 21:15:45 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:22.117 00:08:22.117 real 0m13.348s 00:08:22.117 user 0m7.559s 00:08:22.117 sys 0m5.202s 00:08:22.117 21:15:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.117 21:15:45 -- common/autotest_common.sh@10 -- # set +x 00:08:22.117 ************************************ 00:08:22.117 END TEST dd_uring_copy 00:08:22.117 ************************************ 00:08:22.117 ************************************ 00:08:22.117 END TEST spdk_dd_uring 00:08:22.117 ************************************ 00:08:22.117 00:08:22.117 real 0m13.588s 00:08:22.117 user 0m7.691s 00:08:22.117 sys 0m5.313s 00:08:22.117 21:15:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.117 21:15:45 -- common/autotest_common.sh@10 -- # set +x 00:08:22.377 21:15:45 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:22.377 21:15:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:22.377 21:15:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.377 21:15:45 -- common/autotest_common.sh@10 -- # set +x 00:08:22.377 ************************************ 00:08:22.377 START TEST spdk_dd_sparse 00:08:22.377 ************************************ 00:08:22.377 21:15:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:22.377 * Looking for test storage... 00:08:22.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:22.377 21:15:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:22.377 21:15:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:22.377 21:15:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:22.377 21:15:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:22.377 21:15:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:22.377 21:15:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:22.377 21:15:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:22.377 21:15:46 -- scripts/common.sh@335 -- # IFS=.-: 00:08:22.377 21:15:46 -- scripts/common.sh@335 -- # read -ra ver1 00:08:22.377 21:15:46 -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.377 21:15:46 -- scripts/common.sh@336 -- # read -ra ver2 00:08:22.377 21:15:46 -- scripts/common.sh@337 -- # local 'op=<' 00:08:22.377 21:15:46 -- scripts/common.sh@339 -- # ver1_l=2 00:08:22.377 21:15:46 -- scripts/common.sh@340 -- # ver2_l=1 00:08:22.377 21:15:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:22.377 21:15:46 -- scripts/common.sh@343 -- # case "$op" in 00:08:22.377 21:15:46 -- scripts/common.sh@344 -- # : 1 00:08:22.377 21:15:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:22.377 21:15:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.377 21:15:46 -- scripts/common.sh@364 -- # decimal 1 00:08:22.377 21:15:46 -- scripts/common.sh@352 -- # local d=1 00:08:22.377 21:15:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.377 21:15:46 -- scripts/common.sh@354 -- # echo 1 00:08:22.377 21:15:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:22.377 21:15:46 -- scripts/common.sh@365 -- # decimal 2 00:08:22.377 21:15:46 -- scripts/common.sh@352 -- # local d=2 00:08:22.377 21:15:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.377 21:15:46 -- scripts/common.sh@354 -- # echo 2 00:08:22.377 21:15:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:22.377 21:15:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:22.377 21:15:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:22.377 21:15:46 -- scripts/common.sh@367 -- # return 0 00:08:22.377 21:15:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.377 21:15:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.377 --rc genhtml_branch_coverage=1 00:08:22.377 --rc genhtml_function_coverage=1 00:08:22.377 --rc genhtml_legend=1 00:08:22.377 --rc geninfo_all_blocks=1 00:08:22.377 --rc geninfo_unexecuted_blocks=1 00:08:22.377 00:08:22.377 ' 00:08:22.377 21:15:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.377 --rc genhtml_branch_coverage=1 00:08:22.377 --rc genhtml_function_coverage=1 00:08:22.377 --rc genhtml_legend=1 00:08:22.377 --rc geninfo_all_blocks=1 00:08:22.377 --rc geninfo_unexecuted_blocks=1 00:08:22.377 00:08:22.377 ' 00:08:22.377 21:15:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.377 --rc genhtml_branch_coverage=1 00:08:22.377 --rc genhtml_function_coverage=1 00:08:22.377 --rc genhtml_legend=1 00:08:22.377 --rc geninfo_all_blocks=1 00:08:22.377 --rc geninfo_unexecuted_blocks=1 00:08:22.377 00:08:22.377 ' 00:08:22.377 21:15:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.377 --rc genhtml_branch_coverage=1 00:08:22.377 --rc genhtml_function_coverage=1 00:08:22.377 --rc genhtml_legend=1 00:08:22.377 --rc geninfo_all_blocks=1 00:08:22.377 --rc geninfo_unexecuted_blocks=1 00:08:22.377 00:08:22.377 ' 00:08:22.377 21:15:46 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:22.377 21:15:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.377 21:15:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.377 21:15:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.377 21:15:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.378 21:15:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.378 21:15:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.378 21:15:46 -- paths/export.sh@5 -- # export PATH 00:08:22.378 21:15:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.378 21:15:46 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:22.378 21:15:46 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:22.378 21:15:46 -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:22.378 21:15:46 -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:22.378 21:15:46 -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:22.378 21:15:46 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:22.378 21:15:46 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:22.378 21:15:46 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:22.378 21:15:46 -- dd/sparse.sh@118 -- # prepare 00:08:22.378 21:15:46 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:22.378 21:15:46 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:22.378 1+0 records in 00:08:22.378 1+0 records out 00:08:22.378 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00614233 s, 683 MB/s 00:08:22.378 21:15:46 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:22.378 1+0 records in 00:08:22.378 1+0 records out 00:08:22.378 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00568044 s, 738 MB/s 00:08:22.378 21:15:46 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:22.378 1+0 records in 00:08:22.378 1+0 records out 00:08:22.378 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00553002 s, 758 MB/s 00:08:22.378 21:15:46 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:22.378 21:15:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:22.378 21:15:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.378 21:15:46 -- common/autotest_common.sh@10 -- # set +x 00:08:22.378 ************************************ 00:08:22.378 START TEST dd_sparse_file_to_file 00:08:22.378 
************************************ 00:08:22.378 21:15:46 -- common/autotest_common.sh@1114 -- # file_to_file 00:08:22.378 21:15:46 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:22.378 21:15:46 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:22.378 21:15:46 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:22.378 21:15:46 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:22.378 21:15:46 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:22.378 21:15:46 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:22.378 21:15:46 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:22.378 21:15:46 -- dd/sparse.sh@41 -- # gen_conf 00:08:22.378 21:15:46 -- dd/common.sh@31 -- # xtrace_disable 00:08:22.378 21:15:46 -- common/autotest_common.sh@10 -- # set +x 00:08:22.637 [2024-11-28 21:15:46.150144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:22.637 [2024-11-28 21:15:46.150249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71018 ] 00:08:22.637 { 00:08:22.637 "subsystems": [ 00:08:22.637 { 00:08:22.637 "subsystem": "bdev", 00:08:22.637 "config": [ 00:08:22.637 { 00:08:22.637 "params": { 00:08:22.637 "block_size": 4096, 00:08:22.637 "filename": "dd_sparse_aio_disk", 00:08:22.637 "name": "dd_aio" 00:08:22.637 }, 00:08:22.637 "method": "bdev_aio_create" 00:08:22.637 }, 00:08:22.637 { 00:08:22.637 "params": { 00:08:22.637 "lvs_name": "dd_lvstore", 00:08:22.637 "bdev_name": "dd_aio" 00:08:22.637 }, 00:08:22.637 "method": "bdev_lvol_create_lvstore" 00:08:22.637 }, 00:08:22.637 { 00:08:22.637 "method": "bdev_wait_for_examine" 00:08:22.637 } 00:08:22.637 ] 00:08:22.637 } 00:08:22.637 ] 00:08:22.637 } 00:08:22.637 [2024-11-28 21:15:46.287667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.637 [2024-11-28 21:15:46.317810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.897  [2024-11-28T21:15:46.640Z] Copying: 12/36 [MB] (average 2000 MBps) 00:08:22.897 00:08:22.897 21:15:46 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:22.897 21:15:46 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:22.897 21:15:46 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:22.897 21:15:46 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:22.897 21:15:46 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:22.897 21:15:46 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:22.897 21:15:46 -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:22.897 21:15:46 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:22.897 21:15:46 -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:22.897 21:15:46 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:22.897 00:08:22.897 real 0m0.490s 00:08:22.897 user 0m0.269s 00:08:22.897 sys 0m0.134s 00:08:22.897 21:15:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.897 21:15:46 -- common/autotest_common.sh@10 -- # set +x 00:08:22.897 ************************************ 00:08:22.897 END TEST dd_sparse_file_to_file 00:08:22.897 ************************************ 00:08:22.897 21:15:46 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:08:22.897 21:15:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:22.897 21:15:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.897 21:15:46 -- common/autotest_common.sh@10 -- # set +x 00:08:23.157 ************************************ 00:08:23.157 START TEST dd_sparse_file_to_bdev 00:08:23.157 ************************************ 00:08:23.157 21:15:46 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:08:23.157 21:15:46 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:23.157 21:15:46 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:23.157 21:15:46 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:08:23.157 21:15:46 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:23.157 21:15:46 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:23.157 21:15:46 -- dd/sparse.sh@73 -- # gen_conf 00:08:23.157 21:15:46 -- dd/common.sh@31 -- # xtrace_disable 00:08:23.157 21:15:46 -- common/autotest_common.sh@10 -- # set +x 00:08:23.157 [2024-11-28 21:15:46.683812] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:23.157 [2024-11-28 21:15:46.683950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71054 ] 00:08:23.157 { 00:08:23.157 "subsystems": [ 00:08:23.157 { 00:08:23.157 "subsystem": "bdev", 00:08:23.157 "config": [ 00:08:23.157 { 00:08:23.157 "params": { 00:08:23.157 "block_size": 4096, 00:08:23.157 "filename": "dd_sparse_aio_disk", 00:08:23.157 "name": "dd_aio" 00:08:23.157 }, 00:08:23.157 "method": "bdev_aio_create" 00:08:23.157 }, 00:08:23.157 { 00:08:23.157 "params": { 00:08:23.157 "lvs_name": "dd_lvstore", 00:08:23.157 "lvol_name": "dd_lvol", 00:08:23.157 "size": 37748736, 00:08:23.157 "thin_provision": true 00:08:23.157 }, 00:08:23.157 "method": "bdev_lvol_create" 00:08:23.157 }, 00:08:23.157 { 00:08:23.157 "method": "bdev_wait_for_examine" 00:08:23.157 } 00:08:23.157 ] 00:08:23.157 } 00:08:23.157 ] 00:08:23.157 } 00:08:23.157 [2024-11-28 21:15:46.815708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.157 [2024-11-28 21:15:46.844951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.157 [2024-11-28 21:15:46.896997] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:08:23.416  [2024-11-28T21:15:47.159Z] Copying: 12/36 [MB] (average 500 MBps)[2024-11-28 21:15:46.936520] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:08:23.416 00:08:23.416 00:08:23.416 00:08:23.416 real 0m0.464s 00:08:23.416 user 0m0.284s 00:08:23.416 sys 0m0.104s 00:08:23.416 21:15:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.416 ************************************ 00:08:23.416 END TEST dd_sparse_file_to_bdev 00:08:23.416 ************************************ 00:08:23.416 21:15:47 -- common/autotest_common.sh@10 -- # set +x 
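The stat comparisons in the file-to-file test above are the heart of these sparse checks: %s is the apparent size and %b the allocated block count, and both must survive the copy. A minimal sketch of that verification, assuming GNU stat with its default 512-byte %B unit (the helper name is illustrative, not something dd/sparse.sh defines):

    verify_sparse_copy() {
        # %s = apparent size in bytes, %b = blocks actually allocated (512 bytes each by default)
        local src_size dst_size src_blocks dst_blocks
        src_size=$(stat --printf=%s "$1");   dst_size=$(stat --printf=%s "$2")
        src_blocks=$(stat --printf=%b "$1"); dst_blocks=$(stat --printf=%b "$2")
        [[ $src_size -eq $dst_size && $src_blocks -eq $dst_blocks ]]
    }
    verify_sparse_copy file_zero1 file_zero2

In the run above both files report 37748736 bytes apparent (36 MiB) but only 24576 * 512 = 12582912 bytes allocated, i.e. the three 4 MiB extents written during prepare, so the holes were preserved; that is also why the progress line reads "Copying: 12/36 [MB]". The bdev-to-file case below repeats the same comparison against file_zero3.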
00:08:23.416 21:15:47 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:23.416 21:15:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:23.416 21:15:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.416 21:15:47 -- common/autotest_common.sh@10 -- # set +x 00:08:23.416 ************************************ 00:08:23.416 START TEST dd_sparse_bdev_to_file 00:08:23.416 ************************************ 00:08:23.416 21:15:47 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:08:23.416 21:15:47 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:23.416 21:15:47 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:23.416 21:15:47 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:23.416 21:15:47 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:23.416 21:15:47 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:23.416 21:15:47 -- dd/sparse.sh@91 -- # gen_conf 00:08:23.416 21:15:47 -- dd/common.sh@31 -- # xtrace_disable 00:08:23.416 21:15:47 -- common/autotest_common.sh@10 -- # set +x 00:08:23.675 [2024-11-28 21:15:47.200381] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:23.675 [2024-11-28 21:15:47.200484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71090 ] 00:08:23.675 { 00:08:23.675 "subsystems": [ 00:08:23.675 { 00:08:23.675 "subsystem": "bdev", 00:08:23.675 "config": [ 00:08:23.675 { 00:08:23.675 "params": { 00:08:23.675 "block_size": 4096, 00:08:23.675 "filename": "dd_sparse_aio_disk", 00:08:23.675 "name": "dd_aio" 00:08:23.675 }, 00:08:23.675 "method": "bdev_aio_create" 00:08:23.675 }, 00:08:23.675 { 00:08:23.675 "method": "bdev_wait_for_examine" 00:08:23.675 } 00:08:23.675 ] 00:08:23.675 } 00:08:23.675 ] 00:08:23.675 } 00:08:23.675 [2024-11-28 21:15:47.336561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.675 [2024-11-28 21:15:47.365760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.934  [2024-11-28T21:15:47.677Z] Copying: 12/36 [MB] (average 1333 MBps) 00:08:23.934 00:08:23.934 21:15:47 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:23.934 21:15:47 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:23.934 21:15:47 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:23.934 21:15:47 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:23.934 21:15:47 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:23.934 21:15:47 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:23.934 21:15:47 -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:23.934 21:15:47 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:23.934 21:15:47 -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:23.934 21:15:47 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:23.934 00:08:23.934 real 0m0.475s 00:08:23.934 user 0m0.279s 00:08:23.934 sys 0m0.121s 00:08:23.934 21:15:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.934 21:15:47 -- common/autotest_common.sh@10 -- # set +x 00:08:23.934 ************************************ 00:08:23.934 END TEST dd_sparse_bdev_to_file 00:08:23.934 ************************************ 00:08:23.934 21:15:47 -- 
dd/sparse.sh@1 -- # cleanup 00:08:23.934 21:15:47 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:23.934 21:15:47 -- dd/sparse.sh@12 -- # rm file_zero1 00:08:24.194 21:15:47 -- dd/sparse.sh@13 -- # rm file_zero2 00:08:24.194 21:15:47 -- dd/sparse.sh@14 -- # rm file_zero3 00:08:24.194 00:08:24.194 real 0m1.822s 00:08:24.194 user 0m1.009s 00:08:24.194 sys 0m0.570s 00:08:24.194 21:15:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.194 21:15:47 -- common/autotest_common.sh@10 -- # set +x 00:08:24.194 ************************************ 00:08:24.194 END TEST spdk_dd_sparse 00:08:24.194 ************************************ 00:08:24.194 21:15:47 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:24.194 21:15:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:24.194 21:15:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.194 21:15:47 -- common/autotest_common.sh@10 -- # set +x 00:08:24.194 ************************************ 00:08:24.194 START TEST spdk_dd_negative 00:08:24.194 ************************************ 00:08:24.194 21:15:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:24.194 * Looking for test storage... 00:08:24.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:24.194 21:15:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:24.194 21:15:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:24.194 21:15:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:24.194 21:15:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:24.194 21:15:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:24.194 21:15:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:24.194 21:15:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:24.194 21:15:47 -- scripts/common.sh@335 -- # IFS=.-: 00:08:24.194 21:15:47 -- scripts/common.sh@335 -- # read -ra ver1 00:08:24.194 21:15:47 -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.194 21:15:47 -- scripts/common.sh@336 -- # read -ra ver2 00:08:24.194 21:15:47 -- scripts/common.sh@337 -- # local 'op=<' 00:08:24.194 21:15:47 -- scripts/common.sh@339 -- # ver1_l=2 00:08:24.194 21:15:47 -- scripts/common.sh@340 -- # ver2_l=1 00:08:24.194 21:15:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:24.194 21:15:47 -- scripts/common.sh@343 -- # case "$op" in 00:08:24.194 21:15:47 -- scripts/common.sh@344 -- # : 1 00:08:24.194 21:15:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:24.194 21:15:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:24.194 21:15:47 -- scripts/common.sh@364 -- # decimal 1 00:08:24.194 21:15:47 -- scripts/common.sh@352 -- # local d=1 00:08:24.194 21:15:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.194 21:15:47 -- scripts/common.sh@354 -- # echo 1 00:08:24.194 21:15:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:24.194 21:15:47 -- scripts/common.sh@365 -- # decimal 2 00:08:24.194 21:15:47 -- scripts/common.sh@352 -- # local d=2 00:08:24.194 21:15:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.194 21:15:47 -- scripts/common.sh@354 -- # echo 2 00:08:24.194 21:15:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:24.194 21:15:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:24.194 21:15:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:24.194 21:15:47 -- scripts/common.sh@367 -- # return 0 00:08:24.194 21:15:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.194 21:15:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:24.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.194 --rc genhtml_branch_coverage=1 00:08:24.194 --rc genhtml_function_coverage=1 00:08:24.194 --rc genhtml_legend=1 00:08:24.194 --rc geninfo_all_blocks=1 00:08:24.194 --rc geninfo_unexecuted_blocks=1 00:08:24.194 00:08:24.194 ' 00:08:24.194 21:15:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:24.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.194 --rc genhtml_branch_coverage=1 00:08:24.194 --rc genhtml_function_coverage=1 00:08:24.194 --rc genhtml_legend=1 00:08:24.194 --rc geninfo_all_blocks=1 00:08:24.194 --rc geninfo_unexecuted_blocks=1 00:08:24.194 00:08:24.194 ' 00:08:24.194 21:15:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:24.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.194 --rc genhtml_branch_coverage=1 00:08:24.194 --rc genhtml_function_coverage=1 00:08:24.194 --rc genhtml_legend=1 00:08:24.194 --rc geninfo_all_blocks=1 00:08:24.194 --rc geninfo_unexecuted_blocks=1 00:08:24.194 00:08:24.194 ' 00:08:24.194 21:15:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:24.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.194 --rc genhtml_branch_coverage=1 00:08:24.194 --rc genhtml_function_coverage=1 00:08:24.194 --rc genhtml_legend=1 00:08:24.194 --rc geninfo_all_blocks=1 00:08:24.194 --rc geninfo_unexecuted_blocks=1 00:08:24.194 00:08:24.194 ' 00:08:24.194 21:15:47 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:24.194 21:15:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.194 21:15:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.194 21:15:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.194 21:15:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.194 21:15:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.194 21:15:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.194 21:15:47 -- paths/export.sh@5 -- # export PATH 00:08:24.194 21:15:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.194 21:15:47 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:24.194 21:15:47 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:24.194 21:15:47 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:24.454 21:15:47 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:24.454 21:15:47 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:24.454 21:15:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:24.454 21:15:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.454 21:15:47 -- common/autotest_common.sh@10 -- # set +x 00:08:24.454 ************************************ 00:08:24.454 START TEST dd_invalid_arguments 00:08:24.454 ************************************ 00:08:24.454 21:15:47 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:08:24.454 21:15:47 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:24.454 21:15:47 -- common/autotest_common.sh@650 -- # local es=0 00:08:24.454 21:15:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:24.454 21:15:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.454 21:15:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.454 21:15:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.454 21:15:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.454 21:15:47 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.454 21:15:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.454 21:15:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.454 21:15:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.454 21:15:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:24.454 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:24.454 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:24.454 options: 00:08:24.454 -c, --config JSON config file (default none) 00:08:24.454 --json JSON config file (default none) 00:08:24.454 --json-ignore-init-errors 00:08:24.454 don't exit on invalid config entry 00:08:24.454 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:24.454 -g, --single-file-segments 00:08:24.454 force creating just one hugetlbfs file 00:08:24.454 -h, --help show this usage 00:08:24.454 -i, --shm-id shared memory ID (optional) 00:08:24.454 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:24.454 --lcores lcore to CPU mapping list. The list is in the format: 00:08:24.454 [<,lcores[@CPUs]>...] 00:08:24.454 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:24.454 Within the group, '-' is used for range separator, 00:08:24.454 ',' is used for single number separator. 00:08:24.454 '( )' can be omitted for single element group, 00:08:24.454 '@' can be omitted if cpus and lcores have the same value 00:08:24.454 -n, --mem-channels channel number of memory channels used for DPDK 00:08:24.454 -p, --main-core main (primary) core for DPDK 00:08:24.454 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:24.454 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:24.454 --disable-cpumask-locks Disable CPU core lock files. 00:08:24.454 --silence-noticelog disable notice level logging to stderr 00:08:24.454 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:24.454 -u, --no-pci disable PCI access 00:08:24.454 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:24.454 --max-delay maximum reactor delay (in microseconds) 00:08:24.455 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:24.455 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:24.455 -R, --huge-unlink unlink huge files after initialization 00:08:24.455 -v, --version print SPDK version 00:08:24.455 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:24.455 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:24.455 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:24.455 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:24.455 Tracepoints vary in size and can use more than one trace entry. 
00:08:24.455 --rpcs-allowed comma-separated list of permitted RPCS 00:08:24.455 --env-context Opaque context for use of the env implementation 00:08:24.455 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:24.455 --no-huge run without using hugepages 00:08:24.455 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:24.455 -e, --tpoint-group [:] 00:08:24.455 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:08:24.455 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:24.455 Groups and masks [2024-11-28 21:15:47.998785] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:08:24.455 can be combined (e.g. thread,bdev:0x1). 00:08:24.455 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:24.455 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:24.455 [--------- DD Options ---------] 00:08:24.455 --if Input file. Must specify either --if or --ib. 00:08:24.455 --ib Input bdev. Must specifier either --if or --ib 00:08:24.455 --of Output file. Must specify either --of or --ob. 00:08:24.455 --ob Output bdev. Must specify either --of or --ob. 00:08:24.455 --iflag Input file flags. 00:08:24.455 --oflag Output file flags. 00:08:24.455 --bs I/O unit size (default: 4096) 00:08:24.455 --qd Queue depth (default: 2) 00:08:24.455 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:24.455 --skip Skip this many I/O units at start of input. (default: 0) 00:08:24.455 --seek Skip this many I/O units at start of output. (default: 0) 00:08:24.455 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:08:24.455 --sparse Enable hole skipping in input target 00:08:24.455 Available iflag and oflag values: 00:08:24.455 append - append mode 00:08:24.455 direct - use direct I/O for data 00:08:24.455 directory - fail unless a directory 00:08:24.455 dsync - use synchronized I/O for data 00:08:24.455 noatime - do not update access time 00:08:24.455 noctty - do not assign controlling terminal from file 00:08:24.455 nofollow - do not follow symlinks 00:08:24.455 nonblock - use non-blocking I/O 00:08:24.455 sync - use synchronized I/O for data and metadata 00:08:24.455 21:15:48 -- common/autotest_common.sh@653 -- # es=2 00:08:24.455 21:15:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.455 21:15:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.455 21:15:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.455 00:08:24.455 real 0m0.067s 00:08:24.455 user 0m0.041s 00:08:24.455 sys 0m0.024s 00:08:24.455 21:15:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.455 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:08:24.455 ************************************ 00:08:24.455 END TEST dd_invalid_arguments 00:08:24.455 ************************************ 00:08:24.455 21:15:48 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:24.455 21:15:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:24.455 21:15:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.455 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:08:24.455 ************************************ 00:08:24.455 START TEST dd_double_input 00:08:24.455 ************************************ 00:08:24.455 21:15:48 -- common/autotest_common.sh@1114 -- # double_input 00:08:24.455 21:15:48 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:24.455 21:15:48 -- common/autotest_common.sh@650 -- # local es=0 00:08:24.455 21:15:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:24.455 21:15:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.455 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.455 21:15:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.455 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.455 21:15:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.455 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.455 21:15:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.455 21:15:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.455 21:15:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:24.455 [2024-11-28 21:15:48.120061] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
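Every negative case in this suite leans on the NOT wrapper visible in the trace: run_test treats a failing test body as a failure, and NOT inverts spdk_dd's status so a rejected invocation counts as a pass. A simplified stand-in (the real helper lives in common/autotest_common.sh and additionally records the exit status in es, which is what the surrounding lines are inspecting):

    NOT() {
        # succeed only when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=   # passes: --ii= is rejected, as shown above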
00:08:24.455 21:15:48 -- common/autotest_common.sh@653 -- # es=22 00:08:24.455 21:15:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.455 21:15:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.455 ************************************ 00:08:24.455 END TEST dd_double_input 00:08:24.455 ************************************ 00:08:24.455 21:15:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.455 00:08:24.455 real 0m0.064s 00:08:24.455 user 0m0.039s 00:08:24.455 sys 0m0.025s 00:08:24.455 21:15:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.455 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:08:24.455 21:15:48 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:24.455 21:15:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:24.455 21:15:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.455 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:08:24.455 ************************************ 00:08:24.455 START TEST dd_double_output 00:08:24.455 ************************************ 00:08:24.455 21:15:48 -- common/autotest_common.sh@1114 -- # double_output 00:08:24.455 21:15:48 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:24.455 21:15:48 -- common/autotest_common.sh@650 -- # local es=0 00:08:24.455 21:15:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:24.455 21:15:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.455 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.455 21:15:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.714 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.715 21:15:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.715 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.715 21:15:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.715 21:15:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.715 21:15:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:24.715 [2024-11-28 21:15:48.243445] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
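The valid_exec_arg sequence repeated before each of these calls is the harness making sure it is about to run a real command rather than silently executing nothing. A rough shape of that check, reconstructed from the type -t / type -P / [[ -x ]] steps in the trace (the authoritative version is in common/autotest_common.sh):

    valid_exec_arg() {
        local arg=$1
        case "$(type -t "$arg")" in
            builtin | function) ;;                            # shell-level commands are fine as-is
            file) arg=$(type -P "$arg") && [[ -x $arg ]] ;;   # resolve on PATH and require execute permission
            *) return 1 ;;
        esac
    }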
00:08:24.715 21:15:48 -- common/autotest_common.sh@653 -- # es=22 00:08:24.715 21:15:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.715 21:15:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.715 21:15:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.715 00:08:24.715 real 0m0.066s 00:08:24.715 user 0m0.046s 00:08:24.715 sys 0m0.019s 00:08:24.715 21:15:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.715 ************************************ 00:08:24.715 END TEST dd_double_output 00:08:24.715 ************************************ 00:08:24.715 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:08:24.715 21:15:48 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:24.715 21:15:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:24.715 21:15:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.715 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:08:24.715 ************************************ 00:08:24.715 START TEST dd_no_input 00:08:24.715 ************************************ 00:08:24.715 21:15:48 -- common/autotest_common.sh@1114 -- # no_input 00:08:24.715 21:15:48 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:24.715 21:15:48 -- common/autotest_common.sh@650 -- # local es=0 00:08:24.715 21:15:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:24.715 21:15:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.715 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.715 21:15:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.715 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.715 21:15:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.715 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.715 21:15:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.715 21:15:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.715 21:15:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:24.715 [2024-11-28 21:15:48.362242] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:08:24.715 21:15:48 -- common/autotest_common.sh@653 -- # es=22 00:08:24.715 21:15:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.715 21:15:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.715 21:15:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.715 00:08:24.715 real 0m0.066s 00:08:24.715 user 0m0.042s 00:08:24.715 sys 0m0.023s 00:08:24.715 21:15:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.715 ************************************ 00:08:24.715 END TEST dd_no_input 00:08:24.715 ************************************ 00:08:24.715 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:08:24.715 21:15:48 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:24.715 21:15:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:24.715 21:15:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.715 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:08:24.715 ************************************ 
00:08:24.715 START TEST dd_no_output 00:08:24.715 ************************************ 00:08:24.715 21:15:48 -- common/autotest_common.sh@1114 -- # no_output 00:08:24.715 21:15:48 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:24.715 21:15:48 -- common/autotest_common.sh@650 -- # local es=0 00:08:24.715 21:15:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:24.715 21:15:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.715 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.715 21:15:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.715 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.715 21:15:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.715 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.715 21:15:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.715 21:15:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.715 21:15:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:24.974 [2024-11-28 21:15:48.481600] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:08:24.974 21:15:48 -- common/autotest_common.sh@653 -- # es=22 00:08:24.974 21:15:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.974 21:15:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.974 ************************************ 00:08:24.974 END TEST dd_no_output 00:08:24.974 ************************************ 00:08:24.974 21:15:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.974 00:08:24.974 real 0m0.068s 00:08:24.974 user 0m0.044s 00:08:24.974 sys 0m0.022s 00:08:24.974 21:15:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.974 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:08:24.974 21:15:48 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:24.974 21:15:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:24.974 21:15:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.974 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:08:24.974 ************************************ 00:08:24.974 START TEST dd_wrong_blocksize 00:08:24.974 ************************************ 00:08:24.974 21:15:48 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:08:24.974 21:15:48 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:24.974 21:15:48 -- common/autotest_common.sh@650 -- # local es=0 00:08:24.974 21:15:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:24.974 21:15:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.974 21:15:48 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:08:24.974 21:15:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.974 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.974 21:15:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.974 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.974 21:15:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.974 21:15:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.974 21:15:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:24.974 [2024-11-28 21:15:48.601360] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:08:24.974 21:15:48 -- common/autotest_common.sh@653 -- # es=22 00:08:24.974 ************************************ 00:08:24.974 END TEST dd_wrong_blocksize 00:08:24.974 ************************************ 00:08:24.974 21:15:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.974 21:15:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.974 21:15:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.974 00:08:24.974 real 0m0.065s 00:08:24.974 user 0m0.040s 00:08:24.974 sys 0m0.023s 00:08:24.974 21:15:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.974 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:08:24.974 21:15:48 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:24.974 21:15:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:24.975 21:15:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.975 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:08:24.975 ************************************ 00:08:24.975 START TEST dd_smaller_blocksize 00:08:24.975 ************************************ 00:08:24.975 21:15:48 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:08:24.975 21:15:48 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:24.975 21:15:48 -- common/autotest_common.sh@650 -- # local es=0 00:08:24.975 21:15:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:24.975 21:15:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.975 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.975 21:15:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.975 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.975 21:15:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.975 21:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.975 21:15:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.975 21:15:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:08:24.975 21:15:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:25.234 [2024-11-28 21:15:48.726073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:25.234 [2024-11-28 21:15:48.726158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71309 ] 00:08:25.234 [2024-11-28 21:15:48.866068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.234 [2024-11-28 21:15:48.905449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.234 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:25.234 [2024-11-28 21:15:48.953547] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:25.234 [2024-11-28 21:15:48.953581] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.493 [2024-11-28 21:15:49.015848] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:25.493 21:15:49 -- common/autotest_common.sh@653 -- # es=244 00:08:25.493 21:15:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.493 21:15:49 -- common/autotest_common.sh@662 -- # es=116 00:08:25.493 21:15:49 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:25.493 21:15:49 -- common/autotest_common.sh@670 -- # es=1 00:08:25.493 21:15:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.493 00:08:25.493 real 0m0.414s 00:08:25.493 user 0m0.216s 00:08:25.493 sys 0m0.093s 00:08:25.493 21:15:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.493 ************************************ 00:08:25.493 END TEST dd_smaller_blocksize 00:08:25.493 ************************************ 00:08:25.493 21:15:49 -- common/autotest_common.sh@10 -- # set +x 00:08:25.493 21:15:49 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:25.493 21:15:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:25.493 21:15:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.493 21:15:49 -- common/autotest_common.sh@10 -- # set +x 00:08:25.493 ************************************ 00:08:25.493 START TEST dd_invalid_count 00:08:25.493 ************************************ 00:08:25.493 21:15:49 -- common/autotest_common.sh@1114 -- # invalid_count 00:08:25.493 21:15:49 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:25.493 21:15:49 -- common/autotest_common.sh@650 -- # local es=0 00:08:25.493 21:15:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:25.493 21:15:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.493 21:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.493 21:15:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.493 21:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.493 21:15:49 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.493 21:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.493 21:15:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.493 21:15:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.493 21:15:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:25.493 [2024-11-28 21:15:49.189905] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:08:25.493 21:15:49 -- common/autotest_common.sh@653 -- # es=22 00:08:25.493 21:15:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.493 21:15:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:25.493 21:15:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.493 00:08:25.493 real 0m0.065s 00:08:25.493 user 0m0.045s 00:08:25.493 sys 0m0.019s 00:08:25.493 ************************************ 00:08:25.493 END TEST dd_invalid_count 00:08:25.493 ************************************ 00:08:25.493 21:15:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.493 21:15:49 -- common/autotest_common.sh@10 -- # set +x 00:08:25.752 21:15:49 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:25.752 21:15:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:25.752 21:15:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.752 21:15:49 -- common/autotest_common.sh@10 -- # set +x 00:08:25.752 ************************************ 00:08:25.752 START TEST dd_invalid_oflag 00:08:25.752 ************************************ 00:08:25.752 21:15:49 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:08:25.752 21:15:49 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:25.752 21:15:49 -- common/autotest_common.sh@650 -- # local es=0 00:08:25.752 21:15:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:25.752 21:15:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.752 21:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.752 21:15:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.752 21:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.752 21:15:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.752 21:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.752 21:15:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.752 21:15:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.752 21:15:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:25.752 [2024-11-28 21:15:49.310221] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:08:25.752 21:15:49 -- common/autotest_common.sh@653 -- # es=22 00:08:25.752 21:15:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.752 21:15:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:25.752 
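The exit statuses woven through these traces read most naturally as errnos: 22 matches EINVAL for the flag-validation failures, while the smaller-blocksize run above asked for a single I/O unit of 99999999999999 bytes (roughly 90 TiB), so its allocation failure came back as 244, a negated errno truncated to 8 bits, which the harness then reduces (subtract 128, then map known values down to 1). The later unknown-flag and invalid-json runs show 236 and 234 following the same pattern. Arithmetic only, not part of the captured run:

    echo $(( 99999999999999 / (1024 ** 4) ))   # ~90 TiB requested per I/O unit by the smaller-blocksize case
    ( exit 244 ); echo $?                      # 244 == 256 - 12, how a -ENOMEM (-12) return appears in 8 bits
    echo $(( 256 - 20 )) $(( 256 - 22 ))       # 236 (-ENOTDIR) and 234 (-EINVAL), seen further below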
************************************ 00:08:25.752 END TEST dd_invalid_oflag 00:08:25.752 ************************************ 00:08:25.752 21:15:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.752 00:08:25.752 real 0m0.064s 00:08:25.752 user 0m0.043s 00:08:25.752 sys 0m0.020s 00:08:25.752 21:15:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.752 21:15:49 -- common/autotest_common.sh@10 -- # set +x 00:08:25.752 21:15:49 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:25.752 21:15:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:25.752 21:15:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.752 21:15:49 -- common/autotest_common.sh@10 -- # set +x 00:08:25.752 ************************************ 00:08:25.752 START TEST dd_invalid_iflag 00:08:25.752 ************************************ 00:08:25.753 21:15:49 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:08:25.753 21:15:49 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:25.753 21:15:49 -- common/autotest_common.sh@650 -- # local es=0 00:08:25.753 21:15:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:25.753 21:15:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.753 21:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.753 21:15:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.753 21:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.753 21:15:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.753 21:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.753 21:15:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.753 21:15:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.753 21:15:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:25.753 [2024-11-28 21:15:49.429066] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:08:25.753 21:15:49 -- common/autotest_common.sh@653 -- # es=22 00:08:25.753 21:15:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.753 21:15:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:25.753 21:15:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.753 00:08:25.753 real 0m0.065s 00:08:25.753 user 0m0.038s 00:08:25.753 sys 0m0.027s 00:08:25.753 ************************************ 00:08:25.753 END TEST dd_invalid_iflag 00:08:25.753 ************************************ 00:08:25.753 21:15:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.753 21:15:49 -- common/autotest_common.sh@10 -- # set +x 00:08:25.753 21:15:49 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:25.753 21:15:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:25.753 21:15:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.753 21:15:49 -- common/autotest_common.sh@10 -- # set +x 00:08:26.012 ************************************ 00:08:26.012 START TEST dd_unknown_flag 00:08:26.012 ************************************ 00:08:26.012 21:15:49 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:08:26.012 21:15:49 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:26.012 21:15:49 -- common/autotest_common.sh@650 -- # local es=0 00:08:26.012 21:15:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:26.012 21:15:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.012 21:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.012 21:15:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.012 21:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.012 21:15:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.012 21:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.012 21:15:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.012 21:15:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.012 21:15:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:26.012 [2024-11-28 21:15:49.548442] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:26.012 [2024-11-28 21:15:49.548530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71401 ] 00:08:26.012 [2024-11-28 21:15:49.687823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.012 [2024-11-28 21:15:49.726696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.272 [2024-11-28 21:15:49.775681] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:08:26.272 [2024-11-28 21:15:49.775737] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:26.272 [2024-11-28 21:15:49.775751] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:26.272 [2024-11-28 21:15:49.775765] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.272 [2024-11-28 21:15:49.840093] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:26.272 21:15:49 -- common/autotest_common.sh@653 -- # es=236 00:08:26.272 21:15:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.272 21:15:49 -- common/autotest_common.sh@662 -- # es=108 00:08:26.272 21:15:49 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:26.272 21:15:49 -- common/autotest_common.sh@670 -- # es=1 00:08:26.272 21:15:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.272 00:08:26.272 real 0m0.413s 00:08:26.272 user 0m0.208s 00:08:26.272 sys 0m0.100s 00:08:26.272 ************************************ 00:08:26.272 END TEST dd_unknown_flag 00:08:26.272 ************************************ 00:08:26.272 21:15:49 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:08:26.272 21:15:49 -- common/autotest_common.sh@10 -- # set +x 00:08:26.272 21:15:49 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:26.272 21:15:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:26.272 21:15:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.272 21:15:49 -- common/autotest_common.sh@10 -- # set +x 00:08:26.272 ************************************ 00:08:26.272 START TEST dd_invalid_json 00:08:26.272 ************************************ 00:08:26.272 21:15:49 -- common/autotest_common.sh@1114 -- # invalid_json 00:08:26.272 21:15:49 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:26.272 21:15:49 -- common/autotest_common.sh@650 -- # local es=0 00:08:26.272 21:15:49 -- dd/negative_dd.sh@95 -- # : 00:08:26.272 21:15:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:26.272 21:15:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.272 21:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.272 21:15:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.272 21:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.272 21:15:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.272 21:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.272 21:15:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.272 21:15:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.272 21:15:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:26.532 [2024-11-28 21:15:50.015868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
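The invalid-json case drives spdk_dd the same way the sparse tests did, handing the configuration over an inherited file descriptor, except what arrives on that descriptor is not valid JSON (note the bare ":" in the trace where the sparse tests ran gen_conf), so parsing has to fail. A rough equivalent using process substitution; the fd number will differ from the /dev/fd/62 above, and the dump paths are the ones negative_dd.sh touched earlier:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
        --json <(echo '{ not valid json')
    echo $?   # non-zero, following the "Parsing JSON configuration failed" message logged just below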
00:08:26.532 [2024-11-28 21:15:50.015978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71429 ] 00:08:26.532 [2024-11-28 21:15:50.158059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.532 [2024-11-28 21:15:50.197227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.532 [2024-11-28 21:15:50.197357] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:08:26.532 [2024-11-28 21:15:50.197380] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.532 [2024-11-28 21:15:50.197425] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:26.532 21:15:50 -- common/autotest_common.sh@653 -- # es=234 00:08:26.532 21:15:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.532 21:15:50 -- common/autotest_common.sh@662 -- # es=106 00:08:26.532 21:15:50 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:26.532 21:15:50 -- common/autotest_common.sh@670 -- # es=1 00:08:26.532 21:15:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.532 00:08:26.532 real 0m0.298s 00:08:26.532 user 0m0.134s 00:08:26.532 sys 0m0.063s 00:08:26.532 ************************************ 00:08:26.532 END TEST dd_invalid_json 00:08:26.532 ************************************ 00:08:26.532 21:15:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:26.532 21:15:50 -- common/autotest_common.sh@10 -- # set +x 00:08:26.790 ************************************ 00:08:26.791 END TEST spdk_dd_negative 00:08:26.791 ************************************ 00:08:26.791 00:08:26.791 real 0m2.560s 00:08:26.791 user 0m1.241s 00:08:26.791 sys 0m0.930s 00:08:26.791 21:15:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:26.791 21:15:50 -- common/autotest_common.sh@10 -- # set +x 00:08:26.791 00:08:26.791 real 1m1.277s 00:08:26.791 user 0m36.955s 00:08:26.791 sys 0m15.235s 00:08:26.791 21:15:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:26.791 21:15:50 -- common/autotest_common.sh@10 -- # set +x 00:08:26.791 ************************************ 00:08:26.791 END TEST spdk_dd 00:08:26.791 ************************************ 00:08:26.791 21:15:50 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:08:26.791 21:15:50 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:08:26.791 21:15:50 -- spdk/autotest.sh@255 -- # timing_exit lib 00:08:26.791 21:15:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.791 21:15:50 -- common/autotest_common.sh@10 -- # set +x 00:08:26.791 21:15:50 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:08:26.791 21:15:50 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:08:26.791 21:15:50 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:08:26.791 21:15:50 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:08:26.791 21:15:50 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:08:26.791 21:15:50 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:08:26.791 21:15:50 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:26.791 21:15:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:26.791 21:15:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.791 21:15:50 -- common/autotest_common.sh@10 -- # set +x 00:08:26.791 ************************************ 00:08:26.791 START TEST 
nvmf_tcp 00:08:26.791 ************************************ 00:08:26.791 21:15:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:26.791 * Looking for test storage... 00:08:26.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:26.791 21:15:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:26.791 21:15:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:26.791 21:15:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:27.050 21:15:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:27.050 21:15:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:27.050 21:15:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:27.050 21:15:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:27.050 21:15:50 -- scripts/common.sh@335 -- # IFS=.-: 00:08:27.050 21:15:50 -- scripts/common.sh@335 -- # read -ra ver1 00:08:27.050 21:15:50 -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.050 21:15:50 -- scripts/common.sh@336 -- # read -ra ver2 00:08:27.050 21:15:50 -- scripts/common.sh@337 -- # local 'op=<' 00:08:27.050 21:15:50 -- scripts/common.sh@339 -- # ver1_l=2 00:08:27.050 21:15:50 -- scripts/common.sh@340 -- # ver2_l=1 00:08:27.050 21:15:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:27.050 21:15:50 -- scripts/common.sh@343 -- # case "$op" in 00:08:27.050 21:15:50 -- scripts/common.sh@344 -- # : 1 00:08:27.050 21:15:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:27.050 21:15:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.050 21:15:50 -- scripts/common.sh@364 -- # decimal 1 00:08:27.050 21:15:50 -- scripts/common.sh@352 -- # local d=1 00:08:27.050 21:15:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.050 21:15:50 -- scripts/common.sh@354 -- # echo 1 00:08:27.050 21:15:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:27.050 21:15:50 -- scripts/common.sh@365 -- # decimal 2 00:08:27.050 21:15:50 -- scripts/common.sh@352 -- # local d=2 00:08:27.050 21:15:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.050 21:15:50 -- scripts/common.sh@354 -- # echo 2 00:08:27.050 21:15:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:27.050 21:15:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:27.050 21:15:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:27.050 21:15:50 -- scripts/common.sh@367 -- # return 0 00:08:27.050 21:15:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.050 21:15:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:27.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.050 --rc genhtml_branch_coverage=1 00:08:27.050 --rc genhtml_function_coverage=1 00:08:27.050 --rc genhtml_legend=1 00:08:27.050 --rc geninfo_all_blocks=1 00:08:27.050 --rc geninfo_unexecuted_blocks=1 00:08:27.050 00:08:27.050 ' 00:08:27.050 21:15:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:27.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.050 --rc genhtml_branch_coverage=1 00:08:27.050 --rc genhtml_function_coverage=1 00:08:27.050 --rc genhtml_legend=1 00:08:27.050 --rc geninfo_all_blocks=1 00:08:27.050 --rc geninfo_unexecuted_blocks=1 00:08:27.050 00:08:27.050 ' 00:08:27.050 21:15:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:27.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.050 --rc 
genhtml_branch_coverage=1 00:08:27.050 --rc genhtml_function_coverage=1 00:08:27.050 --rc genhtml_legend=1 00:08:27.050 --rc geninfo_all_blocks=1 00:08:27.050 --rc geninfo_unexecuted_blocks=1 00:08:27.050 00:08:27.050 ' 00:08:27.050 21:15:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:27.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.050 --rc genhtml_branch_coverage=1 00:08:27.050 --rc genhtml_function_coverage=1 00:08:27.050 --rc genhtml_legend=1 00:08:27.050 --rc geninfo_all_blocks=1 00:08:27.050 --rc geninfo_unexecuted_blocks=1 00:08:27.050 00:08:27.050 ' 00:08:27.050 21:15:50 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:27.050 21:15:50 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:27.050 21:15:50 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:27.050 21:15:50 -- nvmf/common.sh@7 -- # uname -s 00:08:27.050 21:15:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.050 21:15:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.050 21:15:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.050 21:15:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.050 21:15:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.050 21:15:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.050 21:15:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.050 21:15:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.050 21:15:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.050 21:15:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.050 21:15:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:08:27.050 21:15:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:08:27.050 21:15:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.050 21:15:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.050 21:15:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:27.050 21:15:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.050 21:15:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.050 21:15:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.050 21:15:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.050 21:15:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.050 21:15:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.050 21:15:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.050 21:15:50 -- paths/export.sh@5 -- # export PATH 00:08:27.050 21:15:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.050 21:15:50 -- nvmf/common.sh@46 -- # : 0 00:08:27.050 21:15:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:27.050 21:15:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:27.051 21:15:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:27.051 21:15:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.051 21:15:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.051 21:15:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:27.051 21:15:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:27.051 21:15:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:27.051 21:15:50 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:27.051 21:15:50 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:27.051 21:15:50 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:27.051 21:15:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:27.051 21:15:50 -- common/autotest_common.sh@10 -- # set +x 00:08:27.051 21:15:50 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:27.051 21:15:50 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:27.051 21:15:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:27.051 21:15:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.051 21:15:50 -- common/autotest_common.sh@10 -- # set +x 00:08:27.051 ************************************ 00:08:27.051 START TEST nvmf_host_management 00:08:27.051 ************************************ 00:08:27.051 21:15:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:27.051 * Looking for test storage... 
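The common.sh sourcing traced above boils down to exporting a small set of NVMe-oF test defaults before host_management.sh is launched. A rough standalone sketch of those exports, assuming nvme-cli is installed for gen-hostnqn (deriving NVME_HOSTID from the UUID part of the NQN matches the values shown in the trace but is otherwise an assumption):

NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_TCP_IP_ADDRESS=127.0.0.1
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # assumption: host ID reuses the NQN's UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NET_TYPE=virt                                 # veth/namespace topology instead of physical NICs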
00:08:27.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.051 21:15:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:27.051 21:15:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:27.051 21:15:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:27.310 21:15:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:27.310 21:15:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:27.310 21:15:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:27.310 21:15:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:27.310 21:15:50 -- scripts/common.sh@335 -- # IFS=.-: 00:08:27.310 21:15:50 -- scripts/common.sh@335 -- # read -ra ver1 00:08:27.310 21:15:50 -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.310 21:15:50 -- scripts/common.sh@336 -- # read -ra ver2 00:08:27.310 21:15:50 -- scripts/common.sh@337 -- # local 'op=<' 00:08:27.310 21:15:50 -- scripts/common.sh@339 -- # ver1_l=2 00:08:27.310 21:15:50 -- scripts/common.sh@340 -- # ver2_l=1 00:08:27.310 21:15:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:27.310 21:15:50 -- scripts/common.sh@343 -- # case "$op" in 00:08:27.310 21:15:50 -- scripts/common.sh@344 -- # : 1 00:08:27.310 21:15:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:27.310 21:15:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.310 21:15:50 -- scripts/common.sh@364 -- # decimal 1 00:08:27.310 21:15:50 -- scripts/common.sh@352 -- # local d=1 00:08:27.310 21:15:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.310 21:15:50 -- scripts/common.sh@354 -- # echo 1 00:08:27.310 21:15:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:27.310 21:15:50 -- scripts/common.sh@365 -- # decimal 2 00:08:27.310 21:15:50 -- scripts/common.sh@352 -- # local d=2 00:08:27.310 21:15:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.310 21:15:50 -- scripts/common.sh@354 -- # echo 2 00:08:27.310 21:15:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:27.310 21:15:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:27.310 21:15:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:27.310 21:15:50 -- scripts/common.sh@367 -- # return 0 00:08:27.310 21:15:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.310 21:15:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:27.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.310 --rc genhtml_branch_coverage=1 00:08:27.310 --rc genhtml_function_coverage=1 00:08:27.310 --rc genhtml_legend=1 00:08:27.310 --rc geninfo_all_blocks=1 00:08:27.310 --rc geninfo_unexecuted_blocks=1 00:08:27.310 00:08:27.310 ' 00:08:27.310 21:15:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:27.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.310 --rc genhtml_branch_coverage=1 00:08:27.310 --rc genhtml_function_coverage=1 00:08:27.310 --rc genhtml_legend=1 00:08:27.310 --rc geninfo_all_blocks=1 00:08:27.310 --rc geninfo_unexecuted_blocks=1 00:08:27.310 00:08:27.310 ' 00:08:27.310 21:15:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:27.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.310 --rc genhtml_branch_coverage=1 00:08:27.310 --rc genhtml_function_coverage=1 00:08:27.310 --rc genhtml_legend=1 00:08:27.310 --rc geninfo_all_blocks=1 00:08:27.310 --rc geninfo_unexecuted_blocks=1 00:08:27.310 00:08:27.310 ' 00:08:27.310 
21:15:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:27.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.310 --rc genhtml_branch_coverage=1 00:08:27.310 --rc genhtml_function_coverage=1 00:08:27.310 --rc genhtml_legend=1 00:08:27.310 --rc geninfo_all_blocks=1 00:08:27.310 --rc geninfo_unexecuted_blocks=1 00:08:27.310 00:08:27.310 ' 00:08:27.310 21:15:50 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:27.310 21:15:50 -- nvmf/common.sh@7 -- # uname -s 00:08:27.310 21:15:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.310 21:15:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.310 21:15:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.310 21:15:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.310 21:15:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.310 21:15:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.310 21:15:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.310 21:15:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.310 21:15:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.310 21:15:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.310 21:15:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:08:27.310 21:15:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:08:27.310 21:15:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.310 21:15:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.310 21:15:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:27.310 21:15:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.310 21:15:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.310 21:15:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.310 21:15:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.310 21:15:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.311 21:15:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.311 21:15:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.311 21:15:50 -- paths/export.sh@5 -- # export PATH 00:08:27.311 21:15:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.311 21:15:50 -- nvmf/common.sh@46 -- # : 0 00:08:27.311 21:15:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:27.311 21:15:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:27.311 21:15:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:27.311 21:15:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.311 21:15:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.311 21:15:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:27.311 21:15:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:27.311 21:15:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:27.311 21:15:50 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:27.311 21:15:50 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:27.311 21:15:50 -- target/host_management.sh@104 -- # nvmftestinit 00:08:27.311 21:15:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:27.311 21:15:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.311 21:15:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:27.311 21:15:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:27.311 21:15:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:27.311 21:15:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.311 21:15:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.311 21:15:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.311 21:15:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:27.311 21:15:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:27.311 21:15:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:27.311 21:15:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:27.311 21:15:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:27.311 21:15:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:27.311 21:15:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.311 21:15:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.311 21:15:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:27.311 21:15:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:27.311 21:15:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:27.311 21:15:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:27.311 21:15:50 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:27.311 21:15:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.311 21:15:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:27.311 21:15:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:27.311 21:15:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:27.311 21:15:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:27.311 21:15:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:27.311 Cannot find device "nvmf_init_br" 00:08:27.311 21:15:50 -- nvmf/common.sh@153 -- # true 00:08:27.311 21:15:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:27.311 Cannot find device "nvmf_tgt_br" 00:08:27.311 21:15:50 -- nvmf/common.sh@154 -- # true 00:08:27.311 21:15:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:27.311 Cannot find device "nvmf_tgt_br2" 00:08:27.311 21:15:50 -- nvmf/common.sh@155 -- # true 00:08:27.311 21:15:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:27.311 Cannot find device "nvmf_init_br" 00:08:27.311 21:15:50 -- nvmf/common.sh@156 -- # true 00:08:27.311 21:15:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:27.311 Cannot find device "nvmf_tgt_br" 00:08:27.311 21:15:50 -- nvmf/common.sh@157 -- # true 00:08:27.311 21:15:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:27.311 Cannot find device "nvmf_tgt_br2" 00:08:27.311 21:15:50 -- nvmf/common.sh@158 -- # true 00:08:27.311 21:15:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:27.311 Cannot find device "nvmf_br" 00:08:27.311 21:15:50 -- nvmf/common.sh@159 -- # true 00:08:27.311 21:15:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:27.311 Cannot find device "nvmf_init_if" 00:08:27.311 21:15:50 -- nvmf/common.sh@160 -- # true 00:08:27.311 21:15:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:27.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.311 21:15:50 -- nvmf/common.sh@161 -- # true 00:08:27.311 21:15:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:27.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.311 21:15:50 -- nvmf/common.sh@162 -- # true 00:08:27.311 21:15:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:27.311 21:15:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:27.311 21:15:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:27.311 21:15:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:27.311 21:15:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:27.311 21:15:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:27.311 21:15:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:27.570 21:15:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:27.570 21:15:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:27.570 21:15:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:27.570 21:15:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:27.570 21:15:51 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:27.570 21:15:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:27.570 21:15:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:27.570 21:15:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:27.570 21:15:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:27.570 21:15:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:27.570 21:15:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:27.570 21:15:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:27.570 21:15:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:27.570 21:15:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:27.570 21:15:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:27.570 21:15:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:27.570 21:15:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:27.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:08:27.570 00:08:27.570 --- 10.0.0.2 ping statistics --- 00:08:27.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.570 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:08:27.570 21:15:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:27.570 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:27.570 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:08:27.570 00:08:27.570 --- 10.0.0.3 ping statistics --- 00:08:27.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.570 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:27.570 21:15:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:27.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:27.570 00:08:27.570 --- 10.0.0.1 ping statistics --- 00:08:27.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.570 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:27.570 21:15:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.570 21:15:51 -- nvmf/common.sh@421 -- # return 0 00:08:27.570 21:15:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:27.570 21:15:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.570 21:15:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:27.570 21:15:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:27.570 21:15:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.570 21:15:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:27.570 21:15:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:27.570 21:15:51 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:08:27.570 21:15:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:27.570 21:15:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.570 21:15:51 -- common/autotest_common.sh@10 -- # set +x 00:08:27.570 ************************************ 00:08:27.570 START TEST nvmf_host_management 00:08:27.570 ************************************ 00:08:27.570 21:15:51 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:08:27.570 21:15:51 -- target/host_management.sh@69 -- # starttarget 00:08:27.570 21:15:51 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:27.570 21:15:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:27.570 21:15:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:27.570 21:15:51 -- common/autotest_common.sh@10 -- # set +x 00:08:27.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.570 21:15:51 -- nvmf/common.sh@469 -- # nvmfpid=71697 00:08:27.570 21:15:51 -- nvmf/common.sh@470 -- # waitforlisten 71697 00:08:27.570 21:15:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:27.570 21:15:51 -- common/autotest_common.sh@829 -- # '[' -z 71697 ']' 00:08:27.570 21:15:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.571 21:15:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.571 21:15:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.571 21:15:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.571 21:15:51 -- common/autotest_common.sh@10 -- # set +x 00:08:27.829 [2024-11-28 21:15:51.345179] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:27.829 [2024-11-28 21:15:51.345425] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.829 [2024-11-28 21:15:51.479930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.829 [2024-11-28 21:15:51.521243] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:27.829 [2024-11-28 21:15:51.521671] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
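The nvmftestinit sequence above (NET_TYPE=virt) first probes for and tears down any stale interfaces (hence the "Cannot find device" lines), then builds the test network out of veth pairs, a network namespace for the target and a bridge, and verifies it with pings before nvmf_tgt is started inside that namespace. Condensed from the trace into a plain iproute2 sketch (interface names as in the log; the second target interface nvmf_tgt_if2/10.0.0.3 and error handling are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br           # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br             # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                   # host-to-target sanity check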
00:08:27.829 [2024-11-28 21:15:51.521839] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.829 [2024-11-28 21:15:51.521980] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.829 [2024-11-28 21:15:51.522388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.829 [2024-11-28 21:15:51.522478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.829 [2024-11-28 21:15:51.522619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:27.829 [2024-11-28 21:15:51.522624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.088 21:15:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:28.088 21:15:51 -- common/autotest_common.sh@862 -- # return 0 00:08:28.088 21:15:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:28.088 21:15:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:28.088 21:15:51 -- common/autotest_common.sh@10 -- # set +x 00:08:28.088 21:15:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.088 21:15:51 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.088 21:15:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.088 21:15:51 -- common/autotest_common.sh@10 -- # set +x 00:08:28.088 [2024-11-28 21:15:51.679559] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.088 21:15:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.088 21:15:51 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:28.088 21:15:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:28.088 21:15:51 -- common/autotest_common.sh@10 -- # set +x 00:08:28.088 21:15:51 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:28.088 21:15:51 -- target/host_management.sh@23 -- # cat 00:08:28.088 21:15:51 -- target/host_management.sh@30 -- # rpc_cmd 00:08:28.088 21:15:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.088 21:15:51 -- common/autotest_common.sh@10 -- # set +x 00:08:28.088 Malloc0 00:08:28.088 [2024-11-28 21:15:51.755629] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.088 21:15:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.088 21:15:51 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:28.088 21:15:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:28.088 21:15:51 -- common/autotest_common.sh@10 -- # set +x 00:08:28.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
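The rpcs.txt batch that rpc_cmd replays above is not echoed into the log; judging by the Malloc0 bdev, the SPDKISFASTANDAWESOME serial and the 10.0.0.2:4420 TCP listener it produces, plus the cnode0/host0 NQNs used later by the remove_host step, a plausible hand-written equivalent with scripts/rpc.py would look roughly like this (the exact flags are an assumption, not a copy of the script):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                  # already issued above
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                     # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0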
00:08:28.089 21:15:51 -- target/host_management.sh@73 -- # perfpid=71744 00:08:28.089 21:15:51 -- target/host_management.sh@74 -- # waitforlisten 71744 /var/tmp/bdevperf.sock 00:08:28.089 21:15:51 -- common/autotest_common.sh@829 -- # '[' -z 71744 ']' 00:08:28.089 21:15:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:28.089 21:15:51 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:28.089 21:15:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:28.089 21:15:51 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:28.089 21:15:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:28.089 21:15:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:28.089 21:15:51 -- nvmf/common.sh@520 -- # config=() 00:08:28.089 21:15:51 -- common/autotest_common.sh@10 -- # set +x 00:08:28.089 21:15:51 -- nvmf/common.sh@520 -- # local subsystem config 00:08:28.089 21:15:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:28.089 21:15:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:28.089 { 00:08:28.089 "params": { 00:08:28.089 "name": "Nvme$subsystem", 00:08:28.089 "trtype": "$TEST_TRANSPORT", 00:08:28.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.089 "adrfam": "ipv4", 00:08:28.089 "trsvcid": "$NVMF_PORT", 00:08:28.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.089 "hdgst": ${hdgst:-false}, 00:08:28.089 "ddgst": ${ddgst:-false} 00:08:28.089 }, 00:08:28.089 "method": "bdev_nvme_attach_controller" 00:08:28.089 } 00:08:28.089 EOF 00:08:28.089 )") 00:08:28.089 21:15:51 -- nvmf/common.sh@542 -- # cat 00:08:28.089 21:15:51 -- nvmf/common.sh@544 -- # jq . 00:08:28.089 21:15:51 -- nvmf/common.sh@545 -- # IFS=, 00:08:28.089 21:15:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:28.089 "params": { 00:08:28.089 "name": "Nvme0", 00:08:28.089 "trtype": "tcp", 00:08:28.089 "traddr": "10.0.0.2", 00:08:28.089 "adrfam": "ipv4", 00:08:28.089 "trsvcid": "4420", 00:08:28.089 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:28.089 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:28.089 "hdgst": false, 00:08:28.089 "ddgst": false 00:08:28.089 }, 00:08:28.089 "method": "bdev_nvme_attach_controller" 00:08:28.089 }' 00:08:28.347 [2024-11-28 21:15:51.855608] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:28.347 [2024-11-28 21:15:51.855849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71744 ] 00:08:28.347 [2024-11-28 21:15:51.997816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.347 [2024-11-28 21:15:52.038453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.606 Running I/O for 10 seconds... 
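Before the host is removed from the subsystem, the test waits until bdevperf has actually pushed some I/O through Nvme0n1 by polling its RPC socket for iostat. A minimal sketch of that wait, assuming the socket path shown in the trace (the retry count, 100-read threshold and jq filter follow the waitforio trace below; the function name and sleep interval are assumptions):

wait_for_io() {
    local sock=$1 bdev=$2 i ops
    for (( i = 10; i != 0; i-- )); do
        ops=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
              | jq -r '.bdevs[0].num_read_ops')
        (( ops >= 100 )) && return 0        # enough reads observed, stop polling
        sleep 0.25
    done
    return 1                                # bdevperf never produced I/O
}
wait_for_io /var/tmp/bdevperf.sock Nvme0n1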
00:08:29.176 21:15:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.176 21:15:52 -- common/autotest_common.sh@862 -- # return 0 00:08:29.176 21:15:52 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:29.176 21:15:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.176 21:15:52 -- common/autotest_common.sh@10 -- # set +x 00:08:29.176 21:15:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.176 21:15:52 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:29.176 21:15:52 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:29.176 21:15:52 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:29.176 21:15:52 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:29.176 21:15:52 -- target/host_management.sh@52 -- # local ret=1 00:08:29.176 21:15:52 -- target/host_management.sh@53 -- # local i 00:08:29.176 21:15:52 -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:29.176 21:15:52 -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:29.176 21:15:52 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:29.176 21:15:52 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:29.176 21:15:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.176 21:15:52 -- common/autotest_common.sh@10 -- # set +x 00:08:29.176 21:15:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.176 21:15:52 -- target/host_management.sh@55 -- # read_io_count=1823 00:08:29.176 21:15:52 -- target/host_management.sh@58 -- # '[' 1823 -ge 100 ']' 00:08:29.176 21:15:52 -- target/host_management.sh@59 -- # ret=0 00:08:29.176 21:15:52 -- target/host_management.sh@60 -- # break 00:08:29.176 21:15:52 -- target/host_management.sh@64 -- # return 0 00:08:29.176 21:15:52 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:29.176 21:15:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.176 21:15:52 -- common/autotest_common.sh@10 -- # set +x 00:08:29.176 [2024-11-28 21:15:52.876665] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876740] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876762] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876770] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the 
state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876785] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876792] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876799] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876814] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876828] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876836] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876851] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876858] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876865] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876872] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876880] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876887] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876895] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876902] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876917] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876925] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876947] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876955] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876962] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876974] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.876998] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.877005] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.877029] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.877068] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.877077] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.877085] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.877094] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.877102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23014f0 is same with the state(5) to be set 00:08:29.176 [2024-11-28 21:15:52.877186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.176 [2024-11-28 21:15:52.877216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.176 [2024-11-28 21:15:52.877241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.176 [2024-11-28 21:15:52.877252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.176 [2024-11-28 21:15:52.877264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.176 [2024-11-28 21:15:52.877273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.176 [2024-11-28 21:15:52.877284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.176 [2024-11-28 21:15:52.877293] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.176 [2024-11-28 21:15:52.877304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.176 [2024-11-28 21:15:52.877313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.176 [2024-11-28 21:15:52.877333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.176 [2024-11-28 21:15:52.877341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.176 [2024-11-28 21:15:52.877353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.176 [2024-11-28 21:15:52.877362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.176 [2024-11-28 21:15:52.877373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.176 [2024-11-28 21:15:52.877382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.176 [2024-11-28 21:15:52.877393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.176 [2024-11-28 21:15:52.877402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.176 [2024-11-28 21:15:52.877413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.176 [2024-11-28 21:15:52.877422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.877985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.877994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.878019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.878030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.878041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.878050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.878061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.878070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.878081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.878091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.878102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.878111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.878122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.878131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.878142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.878151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.878164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.878173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.878184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.878193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.878204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.878213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.177 [2024-11-28 21:15:52.878225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.177 [2024-11-28 21:15:52.878234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.178 [2024-11-28 21:15:52.878254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.178 [2024-11-28 21:15:52.878274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.178 [2024-11-28 21:15:52.878294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.178 [2024-11-28 21:15:52.878313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.178 [2024-11-28 21:15:52.878334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.178 [2024-11-28 21:15:52.878354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.178 [2024-11-28 21:15:52.878380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.178 [2024-11-28 21:15:52.878400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.178 [2024-11-28 21:15:52.878419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.178 [2024-11-28 21:15:52.878439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.178 [2024-11-28 21:15:52.878459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.178 [2024-11-28 21:15:52.878479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.178 [2024-11-28 21:15:52.878500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.178 [2024-11-28 21:15:52.878520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.178 [2024-11-28 21:15:52.878540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.878550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264120 is same with the state(5) to be set 00:08:29.178 [2024-11-28 21:15:52.878599] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2264120 was disconnected and freed. reset controller. 00:08:29.178 [2024-11-28 21:15:52.879829] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:29.178 task offset: 121088 on job bdev=Nvme0n1 fails 00:08:29.178 00:08:29.178 Latency(us) 00:08:29.178 [2024-11-28T21:15:52.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.178 [2024-11-28T21:15:52.921Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:29.178 [2024-11-28T21:15:52.921Z] Job: Nvme0n1 ended in about 0.70 seconds with error 00:08:29.178 Verification LBA range: start 0x0 length 0x400 00:08:29.178 Nvme0n1 : 0.70 2788.38 174.27 91.52 0.00 21846.37 6762.12 30980.65 00:08:29.178 [2024-11-28T21:15:52.921Z] =================================================================================================================== 00:08:29.178 [2024-11-28T21:15:52.921Z] Total : 2788.38 174.27 91.52 0.00 21846.37 6762.12 30980.65 00:08:29.178 [2024-11-28 21:15:52.882070] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.178 [2024-11-28 21:15:52.882107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22666a0 (9): Bad file descriptor 00:08:29.178 21:15:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.178 [2024-11-28 21:15:52.885971] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:08:29.178 [2024-11-28 21:15:52.886080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:08:29.178 [2024-11-28 21:15:52.886108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.178 [2024-11-28 21:15:52.886128] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:08:29.178 [2024-11-28 21:15:52.886138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:08:29.178 [2024-11-28 21:15:52.886146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:08:29.178 [2024-11-28 21:15:52.886155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22666a0 00:08:29.178 [2024-11-28 21:15:52.886192] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22666a0 (9): Bad file descriptor 00:08:29.178 [2024-11-28 21:15:52.886222] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:08:29.178 [2024-11-28 21:15:52.886233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:08:29.178 [2024-11-28 21:15:52.886243] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:08:29.178 [2024-11-28 21:15:52.886260] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:29.178 21:15:52 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:29.178 21:15:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.178 21:15:52 -- common/autotest_common.sh@10 -- # set +x 00:08:29.178 21:15:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.178 21:15:52 -- target/host_management.sh@87 -- # sleep 1 00:08:30.556 21:15:53 -- target/host_management.sh@91 -- # kill -9 71744 00:08:30.556 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (71744) - No such process 00:08:30.556 21:15:53 -- target/host_management.sh@91 -- # true 00:08:30.556 21:15:53 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:30.556 21:15:53 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:30.556 21:15:53 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:30.556 21:15:53 -- nvmf/common.sh@520 -- # config=() 00:08:30.556 21:15:53 -- nvmf/common.sh@520 -- # local subsystem config 00:08:30.556 21:15:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:30.556 21:15:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:30.556 { 00:08:30.556 "params": { 00:08:30.556 "name": "Nvme$subsystem", 00:08:30.556 "trtype": "$TEST_TRANSPORT", 00:08:30.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.556 "adrfam": "ipv4", 00:08:30.556 "trsvcid": "$NVMF_PORT", 00:08:30.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.556 "hdgst": ${hdgst:-false}, 00:08:30.556 "ddgst": ${ddgst:-false} 00:08:30.556 }, 00:08:30.556 "method": "bdev_nvme_attach_controller" 00:08:30.556 } 00:08:30.556 EOF 00:08:30.556 )") 00:08:30.556 21:15:53 -- nvmf/common.sh@542 -- # cat 00:08:30.556 21:15:53 -- nvmf/common.sh@544 -- # jq . 00:08:30.556 21:15:53 -- nvmf/common.sh@545 -- # IFS=, 00:08:30.556 21:15:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:30.556 "params": { 00:08:30.556 "name": "Nvme0", 00:08:30.556 "trtype": "tcp", 00:08:30.556 "traddr": "10.0.0.2", 00:08:30.556 "adrfam": "ipv4", 00:08:30.556 "trsvcid": "4420", 00:08:30.556 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:30.556 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:30.556 "hdgst": false, 00:08:30.556 "ddgst": false 00:08:30.556 }, 00:08:30.556 "method": "bdev_nvme_attach_controller" 00:08:30.556 }' 00:08:30.556 [2024-11-28 21:15:53.952579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:30.556 [2024-11-28 21:15:53.952679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71787 ] 00:08:30.556 [2024-11-28 21:15:54.098381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.556 [2024-11-28 21:15:54.138493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.556 Running I/O for 1 seconds... 
00:08:31.933 00:08:31.933 Latency(us) 00:08:31.933 [2024-11-28T21:15:55.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.933 [2024-11-28T21:15:55.676Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:31.933 Verification LBA range: start 0x0 length 0x400 00:08:31.933 Nvme0n1 : 1.02 3035.22 189.70 0.00 0.00 20731.09 1385.19 28597.53 00:08:31.933 [2024-11-28T21:15:55.676Z] =================================================================================================================== 00:08:31.933 [2024-11-28T21:15:55.676Z] Total : 3035.22 189.70 0.00 0.00 20731.09 1385.19 28597.53 00:08:31.933 21:15:55 -- target/host_management.sh@101 -- # stoptarget 00:08:31.933 21:15:55 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:31.933 21:15:55 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:31.933 21:15:55 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:31.933 21:15:55 -- target/host_management.sh@40 -- # nvmftestfini 00:08:31.933 21:15:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:31.933 21:15:55 -- nvmf/common.sh@116 -- # sync 00:08:31.933 21:15:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:31.933 21:15:55 -- nvmf/common.sh@119 -- # set +e 00:08:31.933 21:15:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:31.933 21:15:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:31.933 rmmod nvme_tcp 00:08:31.933 rmmod nvme_fabrics 00:08:31.933 rmmod nvme_keyring 00:08:31.933 21:15:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:31.933 21:15:55 -- nvmf/common.sh@123 -- # set -e 00:08:31.933 21:15:55 -- nvmf/common.sh@124 -- # return 0 00:08:31.933 21:15:55 -- nvmf/common.sh@477 -- # '[' -n 71697 ']' 00:08:31.933 21:15:55 -- nvmf/common.sh@478 -- # killprocess 71697 00:08:31.933 21:15:55 -- common/autotest_common.sh@936 -- # '[' -z 71697 ']' 00:08:31.933 21:15:55 -- common/autotest_common.sh@940 -- # kill -0 71697 00:08:31.933 21:15:55 -- common/autotest_common.sh@941 -- # uname 00:08:31.933 21:15:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:31.933 21:15:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71697 00:08:31.933 killing process with pid 71697 00:08:31.933 21:15:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:31.933 21:15:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:31.933 21:15:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71697' 00:08:31.933 21:15:55 -- common/autotest_common.sh@955 -- # kill 71697 00:08:31.933 21:15:55 -- common/autotest_common.sh@960 -- # wait 71697 00:08:32.192 [2024-11-28 21:15:55.742665] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:32.192 21:15:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:32.192 21:15:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:32.192 21:15:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:32.192 21:15:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.192 21:15:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:32.192 21:15:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.192 21:15:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.192 21:15:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.192 21:15:55 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:32.192 00:08:32.192 real 0m4.505s 00:08:32.192 user 0m19.155s 00:08:32.192 sys 0m1.122s 00:08:32.192 ************************************ 00:08:32.192 END TEST nvmf_host_management 00:08:32.192 ************************************ 00:08:32.192 21:15:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.192 21:15:55 -- common/autotest_common.sh@10 -- # set +x 00:08:32.192 21:15:55 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:08:32.192 00:08:32.192 real 0m5.191s 00:08:32.192 user 0m19.364s 00:08:32.192 sys 0m1.369s 00:08:32.192 21:15:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.192 21:15:55 -- common/autotest_common.sh@10 -- # set +x 00:08:32.192 ************************************ 00:08:32.192 END TEST nvmf_host_management 00:08:32.192 ************************************ 00:08:32.192 21:15:55 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:32.192 21:15:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:32.192 21:15:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.192 21:15:55 -- common/autotest_common.sh@10 -- # set +x 00:08:32.192 ************************************ 00:08:32.192 START TEST nvmf_lvol 00:08:32.192 ************************************ 00:08:32.192 21:15:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:32.452 * Looking for test storage... 00:08:32.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:32.452 21:15:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:32.452 21:15:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:32.452 21:15:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:32.452 21:15:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:32.452 21:15:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:32.452 21:15:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:32.452 21:15:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:32.452 21:15:56 -- scripts/common.sh@335 -- # IFS=.-: 00:08:32.452 21:15:56 -- scripts/common.sh@335 -- # read -ra ver1 00:08:32.452 21:15:56 -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.452 21:15:56 -- scripts/common.sh@336 -- # read -ra ver2 00:08:32.452 21:15:56 -- scripts/common.sh@337 -- # local 'op=<' 00:08:32.452 21:15:56 -- scripts/common.sh@339 -- # ver1_l=2 00:08:32.452 21:15:56 -- scripts/common.sh@340 -- # ver2_l=1 00:08:32.452 21:15:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:32.452 21:15:56 -- scripts/common.sh@343 -- # case "$op" in 00:08:32.452 21:15:56 -- scripts/common.sh@344 -- # : 1 00:08:32.452 21:15:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:32.452 21:15:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.452 21:15:56 -- scripts/common.sh@364 -- # decimal 1 00:08:32.452 21:15:56 -- scripts/common.sh@352 -- # local d=1 00:08:32.452 21:15:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.452 21:15:56 -- scripts/common.sh@354 -- # echo 1 00:08:32.452 21:15:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:32.452 21:15:56 -- scripts/common.sh@365 -- # decimal 2 00:08:32.452 21:15:56 -- scripts/common.sh@352 -- # local d=2 00:08:32.452 21:15:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.452 21:15:56 -- scripts/common.sh@354 -- # echo 2 00:08:32.452 21:15:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:32.452 21:15:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:32.452 21:15:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:32.452 21:15:56 -- scripts/common.sh@367 -- # return 0 00:08:32.452 21:15:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.452 21:15:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:32.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.453 --rc genhtml_branch_coverage=1 00:08:32.453 --rc genhtml_function_coverage=1 00:08:32.453 --rc genhtml_legend=1 00:08:32.453 --rc geninfo_all_blocks=1 00:08:32.453 --rc geninfo_unexecuted_blocks=1 00:08:32.453 00:08:32.453 ' 00:08:32.453 21:15:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:32.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.453 --rc genhtml_branch_coverage=1 00:08:32.453 --rc genhtml_function_coverage=1 00:08:32.453 --rc genhtml_legend=1 00:08:32.453 --rc geninfo_all_blocks=1 00:08:32.453 --rc geninfo_unexecuted_blocks=1 00:08:32.453 00:08:32.453 ' 00:08:32.453 21:15:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:32.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.453 --rc genhtml_branch_coverage=1 00:08:32.453 --rc genhtml_function_coverage=1 00:08:32.453 --rc genhtml_legend=1 00:08:32.453 --rc geninfo_all_blocks=1 00:08:32.453 --rc geninfo_unexecuted_blocks=1 00:08:32.453 00:08:32.453 ' 00:08:32.453 21:15:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:32.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.453 --rc genhtml_branch_coverage=1 00:08:32.453 --rc genhtml_function_coverage=1 00:08:32.453 --rc genhtml_legend=1 00:08:32.453 --rc geninfo_all_blocks=1 00:08:32.453 --rc geninfo_unexecuted_blocks=1 00:08:32.453 00:08:32.453 ' 00:08:32.453 21:15:56 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:32.453 21:15:56 -- nvmf/common.sh@7 -- # uname -s 00:08:32.453 21:15:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.453 21:15:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.453 21:15:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.453 21:15:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.453 21:15:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.453 21:15:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.453 21:15:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.453 21:15:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.453 21:15:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.453 21:15:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.453 21:15:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:08:32.453 
21:15:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:08:32.453 21:15:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.453 21:15:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.453 21:15:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:32.453 21:15:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.453 21:15:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.453 21:15:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.453 21:15:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.453 21:15:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.453 21:15:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.453 21:15:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.453 21:15:56 -- paths/export.sh@5 -- # export PATH 00:08:32.453 21:15:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.453 21:15:56 -- nvmf/common.sh@46 -- # : 0 00:08:32.453 21:15:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:32.453 21:15:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:32.453 21:15:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:32.453 21:15:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.453 21:15:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.453 21:15:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:08:32.453 21:15:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:32.453 21:15:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:32.453 21:15:56 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:32.453 21:15:56 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:32.453 21:15:56 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:32.453 21:15:56 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:32.453 21:15:56 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:32.453 21:15:56 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:32.453 21:15:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:32.453 21:15:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.453 21:15:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:32.453 21:15:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:32.453 21:15:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:32.453 21:15:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.453 21:15:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.453 21:15:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.453 21:15:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:32.453 21:15:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:32.453 21:15:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:32.453 21:15:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:32.453 21:15:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:32.453 21:15:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:32.453 21:15:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.453 21:15:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.453 21:15:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:32.453 21:15:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:32.453 21:15:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:32.453 21:15:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:32.453 21:15:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:32.453 21:15:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.453 21:15:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:32.453 21:15:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:32.453 21:15:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:32.453 21:15:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:32.453 21:15:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:32.453 21:15:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:32.453 Cannot find device "nvmf_tgt_br" 00:08:32.453 21:15:56 -- nvmf/common.sh@154 -- # true 00:08:32.453 21:15:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:32.453 Cannot find device "nvmf_tgt_br2" 00:08:32.453 21:15:56 -- nvmf/common.sh@155 -- # true 00:08:32.453 21:15:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:32.453 21:15:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:32.453 Cannot find device "nvmf_tgt_br" 00:08:32.453 21:15:56 -- nvmf/common.sh@157 -- # true 00:08:32.453 21:15:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:32.453 Cannot find device "nvmf_tgt_br2" 00:08:32.453 21:15:56 -- nvmf/common.sh@158 -- # true 00:08:32.453 21:15:56 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:08:32.713 21:15:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:32.713 21:15:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:32.713 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:32.713 21:15:56 -- nvmf/common.sh@161 -- # true 00:08:32.714 21:15:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:32.714 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:32.714 21:15:56 -- nvmf/common.sh@162 -- # true 00:08:32.714 21:15:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:32.714 21:15:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:32.714 21:15:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:32.714 21:15:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:32.714 21:15:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:32.714 21:15:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:32.714 21:15:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:32.714 21:15:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:32.714 21:15:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:32.714 21:15:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:32.714 21:15:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:32.714 21:15:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:32.714 21:15:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:32.714 21:15:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:32.714 21:15:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:32.714 21:15:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:32.714 21:15:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:32.714 21:15:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:32.714 21:15:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:32.714 21:15:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:32.714 21:15:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:32.714 21:15:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:32.714 21:15:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:32.714 21:15:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:32.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:08:32.714 00:08:32.714 --- 10.0.0.2 ping statistics --- 00:08:32.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.714 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:32.714 21:15:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:32.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:32.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:08:32.714 00:08:32.714 --- 10.0.0.3 ping statistics --- 00:08:32.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.714 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:32.714 21:15:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:32.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:08:32.714 00:08:32.714 --- 10.0.0.1 ping statistics --- 00:08:32.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.714 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:08:32.714 21:15:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.714 21:15:56 -- nvmf/common.sh@421 -- # return 0 00:08:32.714 21:15:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:32.714 21:15:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.714 21:15:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:32.714 21:15:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:32.714 21:15:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.714 21:15:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:32.714 21:15:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:32.714 21:15:56 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:32.714 21:15:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:32.714 21:15:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.714 21:15:56 -- common/autotest_common.sh@10 -- # set +x 00:08:32.714 21:15:56 -- nvmf/common.sh@469 -- # nvmfpid=72017 00:08:32.714 21:15:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:32.714 21:15:56 -- nvmf/common.sh@470 -- # waitforlisten 72017 00:08:32.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.714 21:15:56 -- common/autotest_common.sh@829 -- # '[' -z 72017 ']' 00:08:32.714 21:15:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.714 21:15:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.714 21:15:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.714 21:15:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.714 21:15:56 -- common/autotest_common.sh@10 -- # set +x 00:08:32.974 [2024-11-28 21:15:56.459210] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:32.974 [2024-11-28 21:15:56.459495] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.974 [2024-11-28 21:15:56.602432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.974 [2024-11-28 21:15:56.643881] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:32.974 [2024-11-28 21:15:56.644335] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.974 [2024-11-28 21:15:56.644489] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:32.974 [2024-11-28 21:15:56.644650] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.974 [2024-11-28 21:15:56.644890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.974 [2024-11-28 21:15:56.644952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.974 [2024-11-28 21:15:56.644955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.948 21:15:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.948 21:15:57 -- common/autotest_common.sh@862 -- # return 0 00:08:33.948 21:15:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:33.948 21:15:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:33.948 21:15:57 -- common/autotest_common.sh@10 -- # set +x 00:08:33.948 21:15:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.948 21:15:57 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:34.211 [2024-11-28 21:15:57.728424] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.211 21:15:57 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:34.470 21:15:58 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:34.470 21:15:58 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:34.728 21:15:58 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:34.728 21:15:58 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:34.987 21:15:58 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:35.246 21:15:58 -- target/nvmf_lvol.sh@29 -- # lvs=7830fe02-aebd-45c7-bf2f-1cb0fee5d7ff 00:08:35.246 21:15:58 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7830fe02-aebd-45c7-bf2f-1cb0fee5d7ff lvol 20 00:08:35.505 21:15:59 -- target/nvmf_lvol.sh@32 -- # lvol=d339bfe4-1d25-4bd5-9d34-9bcf75e8c1e6 00:08:35.505 21:15:59 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:35.765 21:15:59 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d339bfe4-1d25-4bd5-9d34-9bcf75e8c1e6 00:08:36.024 21:15:59 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:36.024 [2024-11-28 21:15:59.717730] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.024 21:15:59 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:36.284 21:16:00 -- target/nvmf_lvol.sh@42 -- # perf_pid=72098 00:08:36.284 21:16:00 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:36.284 21:16:00 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:37.664 21:16:01 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot d339bfe4-1d25-4bd5-9d34-9bcf75e8c1e6 MY_SNAPSHOT 
00:08:37.664 21:16:01 -- target/nvmf_lvol.sh@47 -- # snapshot=cd742146-26de-403e-a839-16f215d3862c 00:08:37.664 21:16:01 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize d339bfe4-1d25-4bd5-9d34-9bcf75e8c1e6 30 00:08:37.923 21:16:01 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone cd742146-26de-403e-a839-16f215d3862c MY_CLONE 00:08:38.182 21:16:01 -- target/nvmf_lvol.sh@49 -- # clone=139f4471-ae5a-40fe-bda6-d75763684acc 00:08:38.182 21:16:01 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 139f4471-ae5a-40fe-bda6-d75763684acc 00:08:38.749 21:16:02 -- target/nvmf_lvol.sh@53 -- # wait 72098 00:08:46.868 Initializing NVMe Controllers 00:08:46.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:46.868 Controller IO queue size 128, less than required. 00:08:46.868 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:46.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:46.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:46.868 Initialization complete. Launching workers. 00:08:46.868 ======================================================== 00:08:46.868 Latency(us) 00:08:46.868 Device Information : IOPS MiB/s Average min max 00:08:46.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10256.88 40.07 12491.29 2028.35 40647.17 00:08:46.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10247.88 40.03 12495.65 1806.67 72034.96 00:08:46.868 ======================================================== 00:08:46.868 Total : 20504.77 80.10 12493.47 1806.67 72034.96 00:08:46.868 00:08:46.868 21:16:10 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:46.868 21:16:10 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d339bfe4-1d25-4bd5-9d34-9bcf75e8c1e6 00:08:47.128 21:16:10 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7830fe02-aebd-45c7-bf2f-1cb0fee5d7ff 00:08:47.387 21:16:11 -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:47.387 21:16:11 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:47.387 21:16:11 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:47.387 21:16:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:47.387 21:16:11 -- nvmf/common.sh@116 -- # sync 00:08:47.387 21:16:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:47.387 21:16:11 -- nvmf/common.sh@119 -- # set +e 00:08:47.387 21:16:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:47.387 21:16:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:47.387 rmmod nvme_tcp 00:08:47.387 rmmod nvme_fabrics 00:08:47.646 rmmod nvme_keyring 00:08:47.646 21:16:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:47.646 21:16:11 -- nvmf/common.sh@123 -- # set -e 00:08:47.646 21:16:11 -- nvmf/common.sh@124 -- # return 0 00:08:47.646 21:16:11 -- nvmf/common.sh@477 -- # '[' -n 72017 ']' 00:08:47.646 21:16:11 -- nvmf/common.sh@478 -- # killprocess 72017 00:08:47.646 21:16:11 -- common/autotest_common.sh@936 -- # '[' -z 72017 ']' 00:08:47.646 21:16:11 -- common/autotest_common.sh@940 -- # kill -0 72017 00:08:47.646 21:16:11 -- common/autotest_common.sh@941 -- # uname 00:08:47.646 
21:16:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:47.646 21:16:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72017 00:08:47.646 killing process with pid 72017 00:08:47.646 21:16:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:47.646 21:16:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:47.646 21:16:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72017' 00:08:47.646 21:16:11 -- common/autotest_common.sh@955 -- # kill 72017 00:08:47.646 21:16:11 -- common/autotest_common.sh@960 -- # wait 72017 00:08:47.646 21:16:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:47.646 21:16:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:47.646 21:16:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:47.646 21:16:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:47.646 21:16:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:47.646 21:16:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.646 21:16:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.646 21:16:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.906 21:16:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:47.906 ************************************ 00:08:47.906 END TEST nvmf_lvol 00:08:47.906 ************************************ 00:08:47.906 00:08:47.906 real 0m15.531s 00:08:47.906 user 1m4.073s 00:08:47.906 sys 0m4.930s 00:08:47.906 21:16:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:47.906 21:16:11 -- common/autotest_common.sh@10 -- # set +x 00:08:47.906 21:16:11 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:47.906 21:16:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:47.906 21:16:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.906 21:16:11 -- common/autotest_common.sh@10 -- # set +x 00:08:47.906 ************************************ 00:08:47.906 START TEST nvmf_lvs_grow 00:08:47.906 ************************************ 00:08:47.906 21:16:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:47.906 * Looking for test storage... 
00:08:47.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:47.906 21:16:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:47.906 21:16:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:47.906 21:16:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:47.906 21:16:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:47.906 21:16:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:47.906 21:16:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:47.906 21:16:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:47.906 21:16:11 -- scripts/common.sh@335 -- # IFS=.-: 00:08:47.906 21:16:11 -- scripts/common.sh@335 -- # read -ra ver1 00:08:47.906 21:16:11 -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.906 21:16:11 -- scripts/common.sh@336 -- # read -ra ver2 00:08:47.906 21:16:11 -- scripts/common.sh@337 -- # local 'op=<' 00:08:47.906 21:16:11 -- scripts/common.sh@339 -- # ver1_l=2 00:08:47.906 21:16:11 -- scripts/common.sh@340 -- # ver2_l=1 00:08:47.906 21:16:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:47.906 21:16:11 -- scripts/common.sh@343 -- # case "$op" in 00:08:47.906 21:16:11 -- scripts/common.sh@344 -- # : 1 00:08:47.906 21:16:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:47.906 21:16:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.906 21:16:11 -- scripts/common.sh@364 -- # decimal 1 00:08:47.906 21:16:11 -- scripts/common.sh@352 -- # local d=1 00:08:47.906 21:16:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.906 21:16:11 -- scripts/common.sh@354 -- # echo 1 00:08:47.906 21:16:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:48.166 21:16:11 -- scripts/common.sh@365 -- # decimal 2 00:08:48.166 21:16:11 -- scripts/common.sh@352 -- # local d=2 00:08:48.166 21:16:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.166 21:16:11 -- scripts/common.sh@354 -- # echo 2 00:08:48.166 21:16:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:48.166 21:16:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:48.166 21:16:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:48.166 21:16:11 -- scripts/common.sh@367 -- # return 0 00:08:48.166 21:16:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.166 21:16:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:48.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.166 --rc genhtml_branch_coverage=1 00:08:48.166 --rc genhtml_function_coverage=1 00:08:48.166 --rc genhtml_legend=1 00:08:48.166 --rc geninfo_all_blocks=1 00:08:48.166 --rc geninfo_unexecuted_blocks=1 00:08:48.166 00:08:48.166 ' 00:08:48.166 21:16:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:48.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.166 --rc genhtml_branch_coverage=1 00:08:48.166 --rc genhtml_function_coverage=1 00:08:48.166 --rc genhtml_legend=1 00:08:48.166 --rc geninfo_all_blocks=1 00:08:48.166 --rc geninfo_unexecuted_blocks=1 00:08:48.166 00:08:48.166 ' 00:08:48.166 21:16:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:48.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.166 --rc genhtml_branch_coverage=1 00:08:48.166 --rc genhtml_function_coverage=1 00:08:48.166 --rc genhtml_legend=1 00:08:48.166 --rc geninfo_all_blocks=1 00:08:48.166 --rc geninfo_unexecuted_blocks=1 00:08:48.166 00:08:48.166 ' 00:08:48.166 
21:16:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:48.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.166 --rc genhtml_branch_coverage=1 00:08:48.166 --rc genhtml_function_coverage=1 00:08:48.166 --rc genhtml_legend=1 00:08:48.166 --rc geninfo_all_blocks=1 00:08:48.166 --rc geninfo_unexecuted_blocks=1 00:08:48.166 00:08:48.166 ' 00:08:48.166 21:16:11 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:48.166 21:16:11 -- nvmf/common.sh@7 -- # uname -s 00:08:48.166 21:16:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.166 21:16:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.166 21:16:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.166 21:16:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.166 21:16:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.166 21:16:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.166 21:16:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.166 21:16:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.166 21:16:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.166 21:16:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.166 21:16:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:08:48.166 21:16:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:08:48.166 21:16:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.166 21:16:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.166 21:16:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:48.166 21:16:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:48.166 21:16:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.166 21:16:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.166 21:16:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.166 21:16:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.166 21:16:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.167 21:16:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.167 21:16:11 -- paths/export.sh@5 -- # export PATH 00:08:48.167 21:16:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.167 21:16:11 -- nvmf/common.sh@46 -- # : 0 00:08:48.167 21:16:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:48.167 21:16:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:48.167 21:16:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:48.167 21:16:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.167 21:16:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.167 21:16:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:48.167 21:16:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:48.167 21:16:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:48.167 21:16:11 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:48.167 21:16:11 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:48.167 21:16:11 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:08:48.167 21:16:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:48.167 21:16:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.167 21:16:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:48.167 21:16:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:48.167 21:16:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:48.167 21:16:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.167 21:16:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.167 21:16:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.167 21:16:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:48.167 21:16:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:48.167 21:16:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:48.167 21:16:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:48.167 21:16:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:48.167 21:16:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:48.167 21:16:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.167 21:16:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.167 21:16:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:48.167 21:16:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:48.167 21:16:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:48.167 21:16:11 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:48.167 21:16:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:48.167 21:16:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.167 21:16:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:48.167 21:16:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:48.167 21:16:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:48.167 21:16:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:48.167 21:16:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:48.167 21:16:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:48.167 Cannot find device "nvmf_tgt_br" 00:08:48.167 21:16:11 -- nvmf/common.sh@154 -- # true 00:08:48.167 21:16:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:48.167 Cannot find device "nvmf_tgt_br2" 00:08:48.167 21:16:11 -- nvmf/common.sh@155 -- # true 00:08:48.167 21:16:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:48.167 21:16:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:48.167 Cannot find device "nvmf_tgt_br" 00:08:48.167 21:16:11 -- nvmf/common.sh@157 -- # true 00:08:48.167 21:16:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:48.167 Cannot find device "nvmf_tgt_br2" 00:08:48.167 21:16:11 -- nvmf/common.sh@158 -- # true 00:08:48.167 21:16:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:48.167 21:16:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:48.167 21:16:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:48.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.167 21:16:11 -- nvmf/common.sh@161 -- # true 00:08:48.167 21:16:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:48.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.167 21:16:11 -- nvmf/common.sh@162 -- # true 00:08:48.167 21:16:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:48.167 21:16:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:48.167 21:16:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:48.167 21:16:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:48.167 21:16:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:48.167 21:16:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:48.167 21:16:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:48.167 21:16:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:48.167 21:16:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:48.427 21:16:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:48.427 21:16:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:48.427 21:16:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:48.427 21:16:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:48.427 21:16:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:48.427 21:16:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
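The nvmf_veth_init sequence above builds a small veth/namespace topology for the TCP tests. A condensed sketch of what those commands do, with names and addresses as they appear in the trace (this is a reading aid reconstructed from the trace, not the nvmf/common.sh source):

ip netns add nvmf_tgt_ns_spdk                                    # target-side network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target veth pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up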
00:08:48.427 21:16:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:48.427 21:16:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:48.427 21:16:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:48.427 21:16:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:48.427 21:16:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:48.427 21:16:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:48.427 21:16:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:48.427 21:16:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:48.427 21:16:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:48.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:08:48.427 00:08:48.427 --- 10.0.0.2 ping statistics --- 00:08:48.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.427 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:48.427 21:16:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:48.427 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:48.427 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:08:48.427 00:08:48.427 --- 10.0.0.3 ping statistics --- 00:08:48.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.427 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:48.427 21:16:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:48.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:08:48.427 00:08:48.427 --- 10.0.0.1 ping statistics --- 00:08:48.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.427 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:08:48.427 21:16:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.427 21:16:11 -- nvmf/common.sh@421 -- # return 0 00:08:48.427 21:16:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:48.427 21:16:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.427 21:16:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:48.427 21:16:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:48.427 21:16:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.427 21:16:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:48.427 21:16:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:48.427 21:16:12 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:08:48.427 21:16:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:48.427 21:16:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:48.427 21:16:12 -- common/autotest_common.sh@10 -- # set +x 00:08:48.427 21:16:12 -- nvmf/common.sh@469 -- # nvmfpid=72422 00:08:48.427 21:16:12 -- nvmf/common.sh@470 -- # waitforlisten 72422 00:08:48.427 21:16:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:48.427 21:16:12 -- common/autotest_common.sh@829 -- # '[' -z 72422 ']' 00:08:48.427 21:16:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.427 21:16:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
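The remaining plumbing before the target comes up: a bridge joins the host-side ends of the veth pairs, iptables admits NVMe/TCP traffic on port 4420, and the three pings confirm both directions work before the nvme-tcp initiator module is loaded and nvmf_tgt is started inside the namespace. Roughly (a sketch assembled from the trace, not the script itself):

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br        # host-side ends of all veth pairs joined on one bridge
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                             # host -> first target address
ping -c 1 10.0.0.3                             # host -> second target address
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # namespace -> host
modprobe nvme-tcp                              # kernel NVMe/TCP initiator
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # backgrounded by the harness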
00:08:48.427 21:16:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.427 21:16:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.427 21:16:12 -- common/autotest_common.sh@10 -- # set +x 00:08:48.427 [2024-11-28 21:16:12.070884] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:48.427 [2024-11-28 21:16:12.070966] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.687 [2024-11-28 21:16:12.210253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.687 [2024-11-28 21:16:12.245271] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:48.687 [2024-11-28 21:16:12.245442] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.687 [2024-11-28 21:16:12.245454] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.687 [2024-11-28 21:16:12.245461] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.687 [2024-11-28 21:16:12.245483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.626 21:16:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.626 21:16:13 -- common/autotest_common.sh@862 -- # return 0 00:08:49.626 21:16:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:49.626 21:16:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.626 21:16:13 -- common/autotest_common.sh@10 -- # set +x 00:08:49.626 21:16:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.626 21:16:13 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:49.626 [2024-11-28 21:16:13.330344] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.626 21:16:13 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:08:49.626 21:16:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:49.626 21:16:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.626 21:16:13 -- common/autotest_common.sh@10 -- # set +x 00:08:49.626 ************************************ 00:08:49.626 START TEST lvs_grow_clean 00:08:49.626 ************************************ 00:08:49.626 21:16:13 -- common/autotest_common.sh@1114 -- # lvs_grow 00:08:49.626 21:16:13 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:49.626 21:16:13 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:49.626 21:16:13 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:49.626 21:16:13 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:49.626 21:16:13 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:49.626 21:16:13 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:49.626 21:16:13 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.626 21:16:13 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.885 21:16:13 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:49.885 21:16:13 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:49.886 21:16:13 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:50.456 21:16:13 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4291411f-25b1-43f7-9315-de6bf33f2a77 00:08:50.456 21:16:13 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4291411f-25b1-43f7-9315-de6bf33f2a77 00:08:50.456 21:16:13 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:50.456 21:16:14 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:50.456 21:16:14 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:50.456 21:16:14 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4291411f-25b1-43f7-9315-de6bf33f2a77 lvol 150 00:08:51.025 21:16:14 -- target/nvmf_lvs_grow.sh@33 -- # lvol=6fdc4ebf-8a40-4669-99f1-50ff7fea7dab 00:08:51.025 21:16:14 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:51.025 21:16:14 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:51.025 [2024-11-28 21:16:14.763048] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:51.025 [2024-11-28 21:16:14.763116] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:51.025 true 00:08:51.285 21:16:14 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:51.285 21:16:14 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4291411f-25b1-43f7-9315-de6bf33f2a77 00:08:51.285 21:16:14 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:51.285 21:16:14 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:51.546 21:16:15 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6fdc4ebf-8a40-4669-99f1-50ff7fea7dab 00:08:51.805 21:16:15 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:52.065 [2024-11-28 21:16:15.695668] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.065 21:16:15 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:52.324 21:16:15 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:52.324 21:16:15 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72510 00:08:52.324 21:16:15 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:52.324 21:16:15 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72510 /var/tmp/bdevperf.sock 00:08:52.324 21:16:15 -- common/autotest_common.sh@829 -- # '[' -z 72510 ']' 00:08:52.324 
21:16:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:52.324 21:16:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.324 21:16:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:52.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:52.324 21:16:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.324 21:16:15 -- common/autotest_common.sh@10 -- # set +x 00:08:52.324 [2024-11-28 21:16:15.965443] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:52.324 [2024-11-28 21:16:15.965506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72510 ] 00:08:52.583 [2024-11-28 21:16:16.102091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.583 [2024-11-28 21:16:16.142143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.520 21:16:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.520 21:16:16 -- common/autotest_common.sh@862 -- # return 0 00:08:53.520 21:16:16 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:53.520 Nvme0n1 00:08:53.780 21:16:17 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:53.780 [ 00:08:53.780 { 00:08:53.780 "name": "Nvme0n1", 00:08:53.780 "aliases": [ 00:08:53.780 "6fdc4ebf-8a40-4669-99f1-50ff7fea7dab" 00:08:53.780 ], 00:08:53.780 "product_name": "NVMe disk", 00:08:53.780 "block_size": 4096, 00:08:53.780 "num_blocks": 38912, 00:08:53.780 "uuid": "6fdc4ebf-8a40-4669-99f1-50ff7fea7dab", 00:08:53.780 "assigned_rate_limits": { 00:08:53.780 "rw_ios_per_sec": 0, 00:08:53.780 "rw_mbytes_per_sec": 0, 00:08:53.780 "r_mbytes_per_sec": 0, 00:08:53.780 "w_mbytes_per_sec": 0 00:08:53.780 }, 00:08:53.780 "claimed": false, 00:08:53.780 "zoned": false, 00:08:53.780 "supported_io_types": { 00:08:53.780 "read": true, 00:08:53.780 "write": true, 00:08:53.780 "unmap": true, 00:08:53.780 "write_zeroes": true, 00:08:53.780 "flush": true, 00:08:53.780 "reset": true, 00:08:53.780 "compare": true, 00:08:53.780 "compare_and_write": true, 00:08:53.780 "abort": true, 00:08:53.780 "nvme_admin": true, 00:08:53.780 "nvme_io": true 00:08:53.780 }, 00:08:53.780 "driver_specific": { 00:08:53.780 "nvme": [ 00:08:53.780 { 00:08:53.780 "trid": { 00:08:53.780 "trtype": "TCP", 00:08:53.780 "adrfam": "IPv4", 00:08:53.780 "traddr": "10.0.0.2", 00:08:53.780 "trsvcid": "4420", 00:08:53.780 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:53.780 }, 00:08:53.780 "ctrlr_data": { 00:08:53.780 "cntlid": 1, 00:08:53.780 "vendor_id": "0x8086", 00:08:53.780 "model_number": "SPDK bdev Controller", 00:08:53.780 "serial_number": "SPDK0", 00:08:53.780 "firmware_revision": "24.01.1", 00:08:53.780 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:53.780 "oacs": { 00:08:53.780 "security": 0, 00:08:53.780 "format": 0, 00:08:53.780 "firmware": 0, 00:08:53.780 "ns_manage": 0 00:08:53.780 }, 00:08:53.780 "multi_ctrlr": true, 00:08:53.780 "ana_reporting": false 00:08:53.780 }, 00:08:53.780 "vs": { 00:08:53.780 
"nvme_version": "1.3" 00:08:53.780 }, 00:08:53.780 "ns_data": { 00:08:53.780 "id": 1, 00:08:53.780 "can_share": true 00:08:53.780 } 00:08:53.780 } 00:08:53.780 ], 00:08:53.780 "mp_policy": "active_passive" 00:08:53.780 } 00:08:53.780 } 00:08:53.780 ] 00:08:54.067 21:16:17 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72534 00:08:54.067 21:16:17 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:54.067 21:16:17 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:54.067 Running I/O for 10 seconds... 00:08:55.029 Latency(us) 00:08:55.029 [2024-11-28T21:16:18.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.029 [2024-11-28T21:16:18.772Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.029 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:55.029 [2024-11-28T21:16:18.772Z] =================================================================================================================== 00:08:55.029 [2024-11-28T21:16:18.772Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:55.029 00:08:55.967 21:16:19 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4291411f-25b1-43f7-9315-de6bf33f2a77 00:08:55.967 [2024-11-28T21:16:19.710Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.967 Nvme0n1 : 2.00 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:08:55.967 [2024-11-28T21:16:19.710Z] =================================================================================================================== 00:08:55.967 [2024-11-28T21:16:19.710Z] Total : 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:08:55.967 00:08:56.226 true 00:08:56.226 21:16:19 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4291411f-25b1-43f7-9315-de6bf33f2a77 00:08:56.226 21:16:19 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:56.485 21:16:20 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:56.485 21:16:20 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:56.485 21:16:20 -- target/nvmf_lvs_grow.sh@65 -- # wait 72534 00:08:57.053 [2024-11-28T21:16:20.796Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.053 Nvme0n1 : 3.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:57.053 [2024-11-28T21:16:20.796Z] =================================================================================================================== 00:08:57.053 [2024-11-28T21:16:20.796Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:57.053 00:08:57.989 [2024-11-28T21:16:21.732Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.989 Nvme0n1 : 4.00 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:08:57.989 [2024-11-28T21:16:21.733Z] =================================================================================================================== 00:08:57.990 [2024-11-28T21:16:21.733Z] Total : 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:08:57.990 00:08:58.925 [2024-11-28T21:16:22.668Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.925 Nvme0n1 : 5.00 6527.80 25.50 0.00 0.00 0.00 0.00 0.00 00:08:58.925 [2024-11-28T21:16:22.668Z] =================================================================================================================== 00:08:58.925 [2024-11-28T21:16:22.668Z] Total : 6527.80 25.50 
0.00 0.00 0.00 0.00 0.00 00:08:58.925 00:09:00.303 [2024-11-28T21:16:24.046Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.303 Nvme0n1 : 6.00 6498.17 25.38 0.00 0.00 0.00 0.00 0.00 00:09:00.303 [2024-11-28T21:16:24.046Z] =================================================================================================================== 00:09:00.303 [2024-11-28T21:16:24.046Z] Total : 6498.17 25.38 0.00 0.00 0.00 0.00 0.00 00:09:00.303 00:09:01.239 [2024-11-28T21:16:24.982Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.239 Nvme0n1 : 7.00 6513.29 25.44 0.00 0.00 0.00 0.00 0.00 00:09:01.239 [2024-11-28T21:16:24.982Z] =================================================================================================================== 00:09:01.239 [2024-11-28T21:16:24.982Z] Total : 6513.29 25.44 0.00 0.00 0.00 0.00 0.00 00:09:01.239 00:09:02.174 [2024-11-28T21:16:25.917Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.174 Nvme0n1 : 8.00 6524.62 25.49 0.00 0.00 0.00 0.00 0.00 00:09:02.174 [2024-11-28T21:16:25.917Z] =================================================================================================================== 00:09:02.174 [2024-11-28T21:16:25.917Z] Total : 6524.62 25.49 0.00 0.00 0.00 0.00 0.00 00:09:02.174 00:09:03.110 [2024-11-28T21:16:26.853Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.110 Nvme0n1 : 9.00 6505.22 25.41 0.00 0.00 0.00 0.00 0.00 00:09:03.110 [2024-11-28T21:16:26.853Z] =================================================================================================================== 00:09:03.110 [2024-11-28T21:16:26.853Z] Total : 6505.22 25.41 0.00 0.00 0.00 0.00 0.00 00:09:03.111 00:09:04.048 [2024-11-28T21:16:27.791Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.048 Nvme0n1 : 10.00 6438.90 25.15 0.00 0.00 0.00 0.00 0.00 00:09:04.048 [2024-11-28T21:16:27.791Z] =================================================================================================================== 00:09:04.048 [2024-11-28T21:16:27.791Z] Total : 6438.90 25.15 0.00 0.00 0.00 0.00 0.00 00:09:04.048 00:09:04.048 00:09:04.048 Latency(us) 00:09:04.048 [2024-11-28T21:16:27.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.048 [2024-11-28T21:16:27.791Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.048 Nvme0n1 : 10.02 6439.41 25.15 0.00 0.00 19871.67 16562.73 46709.29 00:09:04.048 [2024-11-28T21:16:27.791Z] =================================================================================================================== 00:09:04.048 [2024-11-28T21:16:27.791Z] Total : 6439.41 25.15 0.00 0.00 19871.67 16562.73 46709.29 00:09:04.048 0 00:09:04.048 21:16:27 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72510 00:09:04.048 21:16:27 -- common/autotest_common.sh@936 -- # '[' -z 72510 ']' 00:09:04.048 21:16:27 -- common/autotest_common.sh@940 -- # kill -0 72510 00:09:04.048 21:16:27 -- common/autotest_common.sh@941 -- # uname 00:09:04.048 21:16:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:04.048 21:16:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72510 00:09:04.048 21:16:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:04.048 21:16:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:04.048 killing process with pid 72510 00:09:04.048 21:16:27 
-- common/autotest_common.sh@954 -- # echo 'killing process with pid 72510' 00:09:04.048 Received shutdown signal, test time was about 10.000000 seconds 00:09:04.048 00:09:04.048 Latency(us) 00:09:04.048 [2024-11-28T21:16:27.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.048 [2024-11-28T21:16:27.791Z] =================================================================================================================== 00:09:04.048 [2024-11-28T21:16:27.791Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:04.048 21:16:27 -- common/autotest_common.sh@955 -- # kill 72510 00:09:04.048 21:16:27 -- common/autotest_common.sh@960 -- # wait 72510 00:09:04.307 21:16:27 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:04.567 21:16:28 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:04.567 21:16:28 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4291411f-25b1-43f7-9315-de6bf33f2a77 00:09:04.826 21:16:28 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:04.826 21:16:28 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:09:04.826 21:16:28 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:05.086 [2024-11-28 21:16:28.743713] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:05.086 21:16:28 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4291411f-25b1-43f7-9315-de6bf33f2a77 00:09:05.086 21:16:28 -- common/autotest_common.sh@650 -- # local es=0 00:09:05.086 21:16:28 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4291411f-25b1-43f7-9315-de6bf33f2a77 00:09:05.086 21:16:28 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.086 21:16:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.086 21:16:28 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.086 21:16:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.086 21:16:28 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.086 21:16:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.086 21:16:28 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.086 21:16:28 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:05.086 21:16:28 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4291411f-25b1-43f7-9315-de6bf33f2a77 00:09:05.345 request: 00:09:05.345 { 00:09:05.345 "uuid": "4291411f-25b1-43f7-9315-de6bf33f2a77", 00:09:05.345 "method": "bdev_lvol_get_lvstores", 00:09:05.345 "req_id": 1 00:09:05.345 } 00:09:05.345 Got JSON-RPC error response 00:09:05.345 response: 00:09:05.345 { 00:09:05.345 "code": -19, 00:09:05.345 "message": "No such device" 00:09:05.345 } 00:09:05.604 21:16:29 -- common/autotest_common.sh@653 -- # es=1 00:09:05.604 21:16:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:05.604 21:16:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:05.604 21:16:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
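Stripped of the shell plumbing, the clean pass above is this RPC sequence (paths shortened and the lvstore/lvol UUIDs replaced with placeholders; a condensed sketch of the trace, not the test script itself):

rpc.py nvmf_create_transport -t tcp -o -u 8192                          # TCP transport, flags as used by the test
rpc.py bdev_aio_create .../test/nvmf/target/aio_bdev aio_bdev 4096      # 200 MiB file-backed bdev, 4 KiB blocks
rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
       --md-pages-per-cluster-ratio 300 aio_bdev lvs                    # 49 x 4 MiB data clusters
rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150                          # 150 MiB lvol
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
truncate -s 400M .../test/nvmf/target/aio_bdev                          # grow the backing file under load
rpc.py bdev_aio_rescan aio_bdev                                         # 51200 -> 102400 blocks
rpc.py bdev_lvol_grow_lvstore -u <lvs_uuid>                             # lvstore grows from 49 to 99 clusters
rpc.py bdev_lvol_get_lvstores -u <lvs_uuid> | jq -r '.[0].total_data_clusters'

After bdev_aio_delete, the same bdev_lvol_get_lvstores call is expected to fail with "No such device", which is exactly the NOT / es=1 exchange above.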
00:09:05.604 21:16:29 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:05.863 aio_bdev 00:09:05.863 21:16:29 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 6fdc4ebf-8a40-4669-99f1-50ff7fea7dab 00:09:05.863 21:16:29 -- common/autotest_common.sh@897 -- # local bdev_name=6fdc4ebf-8a40-4669-99f1-50ff7fea7dab 00:09:05.863 21:16:29 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:05.863 21:16:29 -- common/autotest_common.sh@899 -- # local i 00:09:05.863 21:16:29 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:05.863 21:16:29 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:05.863 21:16:29 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:06.122 21:16:29 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6fdc4ebf-8a40-4669-99f1-50ff7fea7dab -t 2000 00:09:06.381 [ 00:09:06.381 { 00:09:06.381 "name": "6fdc4ebf-8a40-4669-99f1-50ff7fea7dab", 00:09:06.381 "aliases": [ 00:09:06.381 "lvs/lvol" 00:09:06.381 ], 00:09:06.381 "product_name": "Logical Volume", 00:09:06.381 "block_size": 4096, 00:09:06.381 "num_blocks": 38912, 00:09:06.381 "uuid": "6fdc4ebf-8a40-4669-99f1-50ff7fea7dab", 00:09:06.381 "assigned_rate_limits": { 00:09:06.381 "rw_ios_per_sec": 0, 00:09:06.381 "rw_mbytes_per_sec": 0, 00:09:06.381 "r_mbytes_per_sec": 0, 00:09:06.381 "w_mbytes_per_sec": 0 00:09:06.381 }, 00:09:06.381 "claimed": false, 00:09:06.381 "zoned": false, 00:09:06.381 "supported_io_types": { 00:09:06.381 "read": true, 00:09:06.381 "write": true, 00:09:06.381 "unmap": true, 00:09:06.381 "write_zeroes": true, 00:09:06.381 "flush": false, 00:09:06.381 "reset": true, 00:09:06.381 "compare": false, 00:09:06.381 "compare_and_write": false, 00:09:06.381 "abort": false, 00:09:06.381 "nvme_admin": false, 00:09:06.381 "nvme_io": false 00:09:06.381 }, 00:09:06.381 "driver_specific": { 00:09:06.381 "lvol": { 00:09:06.381 "lvol_store_uuid": "4291411f-25b1-43f7-9315-de6bf33f2a77", 00:09:06.381 "base_bdev": "aio_bdev", 00:09:06.381 "thin_provision": false, 00:09:06.381 "snapshot": false, 00:09:06.381 "clone": false, 00:09:06.381 "esnap_clone": false 00:09:06.381 } 00:09:06.381 } 00:09:06.381 } 00:09:06.381 ] 00:09:06.381 21:16:29 -- common/autotest_common.sh@905 -- # return 0 00:09:06.381 21:16:29 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4291411f-25b1-43f7-9315-de6bf33f2a77 00:09:06.381 21:16:29 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:06.641 21:16:30 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:06.641 21:16:30 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:06.641 21:16:30 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4291411f-25b1-43f7-9315-de6bf33f2a77 00:09:06.900 21:16:30 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:06.900 21:16:30 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6fdc4ebf-8a40-4669-99f1-50ff7fea7dab 00:09:07.161 21:16:30 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4291411f-25b1-43f7-9315-de6bf33f2a77 00:09:07.421 21:16:31 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 
00:09:07.681 21:16:31 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:07.940 00:09:07.940 real 0m18.265s 00:09:07.940 user 0m17.398s 00:09:07.940 sys 0m2.366s 00:09:07.940 21:16:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:07.940 ************************************ 00:09:07.940 END TEST lvs_grow_clean 00:09:07.940 ************************************ 00:09:07.940 21:16:31 -- common/autotest_common.sh@10 -- # set +x 00:09:07.940 21:16:31 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:07.940 21:16:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:07.940 21:16:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.940 21:16:31 -- common/autotest_common.sh@10 -- # set +x 00:09:07.940 ************************************ 00:09:07.940 START TEST lvs_grow_dirty 00:09:07.940 ************************************ 00:09:07.940 21:16:31 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:09:07.940 21:16:31 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:07.940 21:16:31 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:07.940 21:16:31 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:07.940 21:16:31 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:07.940 21:16:31 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:08.199 21:16:31 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:08.199 21:16:31 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:08.199 21:16:31 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:08.199 21:16:31 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.459 21:16:32 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:08.459 21:16:32 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:08.718 21:16:32 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a6e86780-c87a-471f-abdb-fbfe9efd5a18 00:09:08.718 21:16:32 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6e86780-c87a-471f-abdb-fbfe9efd5a18 00:09:08.718 21:16:32 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:08.977 21:16:32 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:08.977 21:16:32 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:08.977 21:16:32 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a6e86780-c87a-471f-abdb-fbfe9efd5a18 lvol 150 00:09:09.235 21:16:32 -- target/nvmf_lvs_grow.sh@33 -- # lvol=90a21474-0e72-4b99-a3e6-7b175814f7f9 00:09:09.235 21:16:32 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:09.235 21:16:32 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:09.493 [2024-11-28 21:16:33.016146] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:09.493 [2024-11-28 21:16:33.016233] vbdev_lvol.c: 
165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:09.493 true 00:09:09.493 21:16:33 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6e86780-c87a-471f-abdb-fbfe9efd5a18 00:09:09.493 21:16:33 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:09.750 21:16:33 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:09.750 21:16:33 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:09.750 21:16:33 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 90a21474-0e72-4b99-a3e6-7b175814f7f9 00:09:10.007 21:16:33 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:10.265 21:16:33 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:10.576 21:16:34 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72781 00:09:10.576 21:16:34 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:10.576 21:16:34 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:10.576 21:16:34 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72781 /var/tmp/bdevperf.sock 00:09:10.576 21:16:34 -- common/autotest_common.sh@829 -- # '[' -z 72781 ']' 00:09:10.576 21:16:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:10.576 21:16:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:10.576 21:16:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:10.576 21:16:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.576 21:16:34 -- common/autotest_common.sh@10 -- # set +x 00:09:10.576 [2024-11-28 21:16:34.291574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:10.576 [2024-11-28 21:16:34.291665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72781 ] 00:09:10.833 [2024-11-28 21:16:34.432410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.833 [2024-11-28 21:16:34.474100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.767 21:16:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.767 21:16:35 -- common/autotest_common.sh@862 -- # return 0 00:09:11.767 21:16:35 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:12.026 Nvme0n1 00:09:12.026 21:16:35 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:12.284 [ 00:09:12.284 { 00:09:12.284 "name": "Nvme0n1", 00:09:12.284 "aliases": [ 00:09:12.284 "90a21474-0e72-4b99-a3e6-7b175814f7f9" 00:09:12.285 ], 00:09:12.285 "product_name": "NVMe disk", 00:09:12.285 "block_size": 4096, 00:09:12.285 "num_blocks": 38912, 00:09:12.285 "uuid": "90a21474-0e72-4b99-a3e6-7b175814f7f9", 00:09:12.285 "assigned_rate_limits": { 00:09:12.285 "rw_ios_per_sec": 0, 00:09:12.285 "rw_mbytes_per_sec": 0, 00:09:12.285 "r_mbytes_per_sec": 0, 00:09:12.285 "w_mbytes_per_sec": 0 00:09:12.285 }, 00:09:12.285 "claimed": false, 00:09:12.285 "zoned": false, 00:09:12.285 "supported_io_types": { 00:09:12.285 "read": true, 00:09:12.285 "write": true, 00:09:12.285 "unmap": true, 00:09:12.285 "write_zeroes": true, 00:09:12.285 "flush": true, 00:09:12.285 "reset": true, 00:09:12.285 "compare": true, 00:09:12.285 "compare_and_write": true, 00:09:12.285 "abort": true, 00:09:12.285 "nvme_admin": true, 00:09:12.285 "nvme_io": true 00:09:12.285 }, 00:09:12.285 "driver_specific": { 00:09:12.285 "nvme": [ 00:09:12.285 { 00:09:12.285 "trid": { 00:09:12.285 "trtype": "TCP", 00:09:12.285 "adrfam": "IPv4", 00:09:12.285 "traddr": "10.0.0.2", 00:09:12.285 "trsvcid": "4420", 00:09:12.285 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:12.285 }, 00:09:12.285 "ctrlr_data": { 00:09:12.285 "cntlid": 1, 00:09:12.285 "vendor_id": "0x8086", 00:09:12.285 "model_number": "SPDK bdev Controller", 00:09:12.285 "serial_number": "SPDK0", 00:09:12.285 "firmware_revision": "24.01.1", 00:09:12.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:12.285 "oacs": { 00:09:12.285 "security": 0, 00:09:12.285 "format": 0, 00:09:12.285 "firmware": 0, 00:09:12.285 "ns_manage": 0 00:09:12.285 }, 00:09:12.285 "multi_ctrlr": true, 00:09:12.285 "ana_reporting": false 00:09:12.285 }, 00:09:12.285 "vs": { 00:09:12.285 "nvme_version": "1.3" 00:09:12.285 }, 00:09:12.285 "ns_data": { 00:09:12.285 "id": 1, 00:09:12.285 "can_share": true 00:09:12.285 } 00:09:12.285 } 00:09:12.285 ], 00:09:12.285 "mp_policy": "active_passive" 00:09:12.285 } 00:09:12.285 } 00:09:12.285 ] 00:09:12.285 21:16:35 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72805 00:09:12.285 21:16:35 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:12.285 21:16:35 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:12.285 Running I/O for 10 seconds... 
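As in the clean pass, the I/O load comes from bdevperf attached to the target over NVMe/TCP and driven through its own RPC socket. The sequence, reconstructed from the commands in the trace (paths shortened; a sketch, not the script):

.../build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
       -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0        # exposes the exported lvol as Nvme0n1
.../examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # start the queued randwrite job

Starting bdevperf with -z and kicking it off through perform_tests lets the test grow the lvstore while the 10-second workload is still in flight.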
00:09:13.222 Latency(us) 00:09:13.222 [2024-11-28T21:16:36.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.222 [2024-11-28T21:16:36.965Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.222 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:13.222 [2024-11-28T21:16:36.965Z] =================================================================================================================== 00:09:13.222 [2024-11-28T21:16:36.965Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:13.222 00:09:14.158 21:16:37 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a6e86780-c87a-471f-abdb-fbfe9efd5a18 00:09:14.416 [2024-11-28T21:16:38.159Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.416 Nvme0n1 : 2.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:14.416 [2024-11-28T21:16:38.159Z] =================================================================================================================== 00:09:14.416 [2024-11-28T21:16:38.159Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:14.416 00:09:14.416 true 00:09:14.416 21:16:38 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6e86780-c87a-471f-abdb-fbfe9efd5a18 00:09:14.416 21:16:38 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:14.984 21:16:38 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:14.984 21:16:38 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:14.984 21:16:38 -- target/nvmf_lvs_grow.sh@65 -- # wait 72805 00:09:15.242 [2024-11-28T21:16:38.985Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.242 Nvme0n1 : 3.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:15.242 [2024-11-28T21:16:38.985Z] =================================================================================================================== 00:09:15.242 [2024-11-28T21:16:38.985Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:15.242 00:09:16.618 [2024-11-28T21:16:40.361Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.618 Nvme0n1 : 4.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:16.618 [2024-11-28T21:16:40.361Z] =================================================================================================================== 00:09:16.618 [2024-11-28T21:16:40.361Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:16.618 00:09:17.555 [2024-11-28T21:16:41.298Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.555 Nvme0n1 : 5.00 6615.20 25.84 0.00 0.00 0.00 0.00 0.00 00:09:17.555 [2024-11-28T21:16:41.298Z] =================================================================================================================== 00:09:17.555 [2024-11-28T21:16:41.298Z] Total : 6615.20 25.84 0.00 0.00 0.00 0.00 0.00 00:09:17.555 00:09:18.492 [2024-11-28T21:16:42.235Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.492 Nvme0n1 : 6.00 6549.83 25.59 0.00 0.00 0.00 0.00 0.00 00:09:18.492 [2024-11-28T21:16:42.235Z] =================================================================================================================== 00:09:18.492 [2024-11-28T21:16:42.235Z] Total : 6549.83 25.59 0.00 0.00 0.00 0.00 0.00 00:09:18.492 00:09:19.446 [2024-11-28T21:16:43.189Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:19.446 Nvme0n1 : 7.00 6492.00 25.36 0.00 0.00 0.00 0.00 0.00 00:09:19.446 [2024-11-28T21:16:43.189Z] =================================================================================================================== 00:09:19.446 [2024-11-28T21:16:43.189Z] Total : 6492.00 25.36 0.00 0.00 0.00 0.00 0.00 00:09:19.446 00:09:20.391 [2024-11-28T21:16:44.134Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.391 Nvme0n1 : 8.00 6442.50 25.17 0.00 0.00 0.00 0.00 0.00 00:09:20.391 [2024-11-28T21:16:44.134Z] =================================================================================================================== 00:09:20.391 [2024-11-28T21:16:44.134Z] Total : 6442.50 25.17 0.00 0.00 0.00 0.00 0.00 00:09:20.391 00:09:21.327 [2024-11-28T21:16:45.070Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.327 Nvme0n1 : 9.00 6418.11 25.07 0.00 0.00 0.00 0.00 0.00 00:09:21.327 [2024-11-28T21:16:45.070Z] =================================================================================================================== 00:09:21.327 [2024-11-28T21:16:45.070Z] Total : 6418.11 25.07 0.00 0.00 0.00 0.00 0.00 00:09:21.327 00:09:22.263 [2024-11-28T21:16:46.006Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.263 Nvme0n1 : 10.00 6411.30 25.04 0.00 0.00 0.00 0.00 0.00 00:09:22.263 [2024-11-28T21:16:46.006Z] =================================================================================================================== 00:09:22.263 [2024-11-28T21:16:46.006Z] Total : 6411.30 25.04 0.00 0.00 0.00 0.00 0.00 00:09:22.263 00:09:22.263 00:09:22.263 Latency(us) 00:09:22.263 [2024-11-28T21:16:46.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.263 [2024-11-28T21:16:46.006Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.263 Nvme0n1 : 10.01 6416.77 25.07 0.00 0.00 19941.95 3798.11 93895.21 00:09:22.263 [2024-11-28T21:16:46.006Z] =================================================================================================================== 00:09:22.263 [2024-11-28T21:16:46.006Z] Total : 6416.77 25.07 0.00 0.00 19941.95 3798.11 93895.21 00:09:22.263 0 00:09:22.263 21:16:45 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72781 00:09:22.263 21:16:45 -- common/autotest_common.sh@936 -- # '[' -z 72781 ']' 00:09:22.263 21:16:45 -- common/autotest_common.sh@940 -- # kill -0 72781 00:09:22.263 21:16:45 -- common/autotest_common.sh@941 -- # uname 00:09:22.263 21:16:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:22.263 21:16:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72781 00:09:22.522 21:16:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:22.522 21:16:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:22.522 killing process with pid 72781 00:09:22.522 21:16:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72781' 00:09:22.522 21:16:46 -- common/autotest_common.sh@955 -- # kill 72781 00:09:22.522 Received shutdown signal, test time was about 10.000000 seconds 00:09:22.522 00:09:22.522 Latency(us) 00:09:22.522 [2024-11-28T21:16:46.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.522 [2024-11-28T21:16:46.265Z] =================================================================================================================== 00:09:22.522 [2024-11-28T21:16:46.265Z] Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:09:22.522 21:16:46 -- common/autotest_common.sh@960 -- # wait 72781 00:09:22.522 21:16:46 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:22.781 21:16:46 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6e86780-c87a-471f-abdb-fbfe9efd5a18 00:09:22.781 21:16:46 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:23.040 21:16:46 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:23.040 21:16:46 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:09:23.040 21:16:46 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72422 00:09:23.040 21:16:46 -- target/nvmf_lvs_grow.sh@74 -- # wait 72422 00:09:23.040 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72422 Killed "${NVMF_APP[@]}" "$@" 00:09:23.040 21:16:46 -- target/nvmf_lvs_grow.sh@74 -- # true 00:09:23.040 21:16:46 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:09:23.040 21:16:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:23.040 21:16:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:23.040 21:16:46 -- common/autotest_common.sh@10 -- # set +x 00:09:23.040 21:16:46 -- nvmf/common.sh@469 -- # nvmfpid=72935 00:09:23.040 21:16:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:23.040 21:16:46 -- nvmf/common.sh@470 -- # waitforlisten 72935 00:09:23.040 21:16:46 -- common/autotest_common.sh@829 -- # '[' -z 72935 ']' 00:09:23.040 21:16:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.040 21:16:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:23.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.040 21:16:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.040 21:16:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:23.040 21:16:46 -- common/autotest_common.sh@10 -- # set +x 00:09:23.299 [2024-11-28 21:16:46.817591] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:23.299 [2024-11-28 21:16:46.817674] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.299 [2024-11-28 21:16:46.950041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.300 [2024-11-28 21:16:46.987417] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:23.300 [2024-11-28 21:16:46.987563] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.300 [2024-11-28 21:16:46.987576] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.300 [2024-11-28 21:16:46.987586] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
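The dirty branch differs from the clean one only in how the lvstore comes back: the first nvmf_tgt (pid 72422) is killed with SIGKILL while the grown lvstore is still dirty, a fresh target is started, and re-creating the AIO bdev triggers blobstore recovery, after which the cluster counts are checked again. In outline (commands and expected values taken from the trace; a sketch):

kill -9 "$nvmfpid"                                               # hard-kill the target, leaving lvstore metadata dirty
ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # restart the target
rpc.py bdev_aio_create .../test/nvmf/target/aio_bdev aio_bdev 4096    # blobstore recovery runs when the lvstore loads
rpc.py bdev_get_bdevs -b <lvol_uuid> -t 2000                          # wait for the recovered lvol to reappear
rpc.py bdev_lvol_get_lvstores -u <lvs_uuid> | jq -r '.[0].free_clusters'        # expected 61
rpc.py bdev_lvol_get_lvstores -u <lvs_uuid> | jq -r '.[0].total_data_clusters'  # expected 99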
00:09:23.300 [2024-11-28 21:16:46.987618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.236 21:16:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:24.236 21:16:47 -- common/autotest_common.sh@862 -- # return 0 00:09:24.236 21:16:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:24.236 21:16:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:24.236 21:16:47 -- common/autotest_common.sh@10 -- # set +x 00:09:24.236 21:16:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.236 21:16:47 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.495 [2024-11-28 21:16:48.013418] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:24.495 [2024-11-28 21:16:48.013754] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:24.495 [2024-11-28 21:16:48.013930] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:24.495 21:16:48 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:09:24.495 21:16:48 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 90a21474-0e72-4b99-a3e6-7b175814f7f9 00:09:24.495 21:16:48 -- common/autotest_common.sh@897 -- # local bdev_name=90a21474-0e72-4b99-a3e6-7b175814f7f9 00:09:24.495 21:16:48 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:24.495 21:16:48 -- common/autotest_common.sh@899 -- # local i 00:09:24.495 21:16:48 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:24.495 21:16:48 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:24.495 21:16:48 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:24.755 21:16:48 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 90a21474-0e72-4b99-a3e6-7b175814f7f9 -t 2000 00:09:24.755 [ 00:09:24.755 { 00:09:24.755 "name": "90a21474-0e72-4b99-a3e6-7b175814f7f9", 00:09:24.755 "aliases": [ 00:09:24.755 "lvs/lvol" 00:09:24.755 ], 00:09:24.755 "product_name": "Logical Volume", 00:09:24.755 "block_size": 4096, 00:09:24.755 "num_blocks": 38912, 00:09:24.755 "uuid": "90a21474-0e72-4b99-a3e6-7b175814f7f9", 00:09:24.755 "assigned_rate_limits": { 00:09:24.755 "rw_ios_per_sec": 0, 00:09:24.755 "rw_mbytes_per_sec": 0, 00:09:24.755 "r_mbytes_per_sec": 0, 00:09:24.755 "w_mbytes_per_sec": 0 00:09:24.755 }, 00:09:24.755 "claimed": false, 00:09:24.755 "zoned": false, 00:09:24.755 "supported_io_types": { 00:09:24.755 "read": true, 00:09:24.755 "write": true, 00:09:24.755 "unmap": true, 00:09:24.755 "write_zeroes": true, 00:09:24.755 "flush": false, 00:09:24.755 "reset": true, 00:09:24.755 "compare": false, 00:09:24.755 "compare_and_write": false, 00:09:24.755 "abort": false, 00:09:24.755 "nvme_admin": false, 00:09:24.755 "nvme_io": false 00:09:24.755 }, 00:09:24.755 "driver_specific": { 00:09:24.755 "lvol": { 00:09:24.755 "lvol_store_uuid": "a6e86780-c87a-471f-abdb-fbfe9efd5a18", 00:09:24.755 "base_bdev": "aio_bdev", 00:09:24.755 "thin_provision": false, 00:09:24.755 "snapshot": false, 00:09:24.755 "clone": false, 00:09:24.755 "esnap_clone": false 00:09:24.755 } 00:09:24.755 } 00:09:24.755 } 00:09:24.755 ] 00:09:24.755 21:16:48 -- common/autotest_common.sh@905 -- # return 0 00:09:24.755 21:16:48 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a6e86780-c87a-471f-abdb-fbfe9efd5a18 00:09:24.755 21:16:48 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:09:25.332 21:16:48 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:09:25.332 21:16:48 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6e86780-c87a-471f-abdb-fbfe9efd5a18 00:09:25.332 21:16:48 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:09:25.332 21:16:49 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:09:25.332 21:16:49 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:25.594 [2024-11-28 21:16:49.243200] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:25.594 21:16:49 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6e86780-c87a-471f-abdb-fbfe9efd5a18 00:09:25.594 21:16:49 -- common/autotest_common.sh@650 -- # local es=0 00:09:25.594 21:16:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6e86780-c87a-471f-abdb-fbfe9efd5a18 00:09:25.594 21:16:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.594 21:16:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.594 21:16:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.594 21:16:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.594 21:16:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.594 21:16:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.594 21:16:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.594 21:16:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:25.594 21:16:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6e86780-c87a-471f-abdb-fbfe9efd5a18 00:09:25.854 request: 00:09:25.854 { 00:09:25.854 "uuid": "a6e86780-c87a-471f-abdb-fbfe9efd5a18", 00:09:25.854 "method": "bdev_lvol_get_lvstores", 00:09:25.854 "req_id": 1 00:09:25.854 } 00:09:25.854 Got JSON-RPC error response 00:09:25.854 response: 00:09:25.854 { 00:09:25.854 "code": -19, 00:09:25.854 "message": "No such device" 00:09:25.854 } 00:09:25.854 21:16:49 -- common/autotest_common.sh@653 -- # es=1 00:09:25.854 21:16:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:25.854 21:16:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:25.854 21:16:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:25.854 21:16:49 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:26.114 aio_bdev 00:09:26.114 21:16:49 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 90a21474-0e72-4b99-a3e6-7b175814f7f9 00:09:26.114 21:16:49 -- common/autotest_common.sh@897 -- # local bdev_name=90a21474-0e72-4b99-a3e6-7b175814f7f9 00:09:26.114 21:16:49 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:26.114 21:16:49 -- common/autotest_common.sh@899 -- # local i 00:09:26.114 21:16:49 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:26.114 21:16:49 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:26.114 21:16:49 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:26.373 21:16:49 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 90a21474-0e72-4b99-a3e6-7b175814f7f9 -t 2000 00:09:26.632 [ 00:09:26.632 { 00:09:26.632 "name": "90a21474-0e72-4b99-a3e6-7b175814f7f9", 00:09:26.632 "aliases": [ 00:09:26.632 "lvs/lvol" 00:09:26.632 ], 00:09:26.632 "product_name": "Logical Volume", 00:09:26.632 "block_size": 4096, 00:09:26.632 "num_blocks": 38912, 00:09:26.632 "uuid": "90a21474-0e72-4b99-a3e6-7b175814f7f9", 00:09:26.632 "assigned_rate_limits": { 00:09:26.632 "rw_ios_per_sec": 0, 00:09:26.632 "rw_mbytes_per_sec": 0, 00:09:26.632 "r_mbytes_per_sec": 0, 00:09:26.632 "w_mbytes_per_sec": 0 00:09:26.632 }, 00:09:26.632 "claimed": false, 00:09:26.632 "zoned": false, 00:09:26.632 "supported_io_types": { 00:09:26.632 "read": true, 00:09:26.632 "write": true, 00:09:26.632 "unmap": true, 00:09:26.632 "write_zeroes": true, 00:09:26.632 "flush": false, 00:09:26.632 "reset": true, 00:09:26.632 "compare": false, 00:09:26.632 "compare_and_write": false, 00:09:26.632 "abort": false, 00:09:26.632 "nvme_admin": false, 00:09:26.632 "nvme_io": false 00:09:26.632 }, 00:09:26.632 "driver_specific": { 00:09:26.632 "lvol": { 00:09:26.632 "lvol_store_uuid": "a6e86780-c87a-471f-abdb-fbfe9efd5a18", 00:09:26.632 "base_bdev": "aio_bdev", 00:09:26.632 "thin_provision": false, 00:09:26.632 "snapshot": false, 00:09:26.632 "clone": false, 00:09:26.632 "esnap_clone": false 00:09:26.632 } 00:09:26.632 } 00:09:26.632 } 00:09:26.632 ] 00:09:26.632 21:16:50 -- common/autotest_common.sh@905 -- # return 0 00:09:26.632 21:16:50 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6e86780-c87a-471f-abdb-fbfe9efd5a18 00:09:26.632 21:16:50 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:26.891 21:16:50 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:26.891 21:16:50 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:26.891 21:16:50 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6e86780-c87a-471f-abdb-fbfe9efd5a18 00:09:27.150 21:16:50 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:27.150 21:16:50 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 90a21474-0e72-4b99-a3e6-7b175814f7f9 00:09:27.409 21:16:50 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a6e86780-c87a-471f-abdb-fbfe9efd5a18 00:09:27.668 21:16:51 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:27.927 21:16:51 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:28.186 00:09:28.186 real 0m20.131s 00:09:28.186 user 0m41.219s 00:09:28.186 sys 0m9.034s 00:09:28.186 ************************************ 00:09:28.186 END TEST lvs_grow_dirty 00:09:28.186 ************************************ 00:09:28.186 21:16:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:28.186 21:16:51 -- common/autotest_common.sh@10 -- # set +x 00:09:28.186 21:16:51 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:28.186 21:16:51 -- common/autotest_common.sh@806 -- # type=--id 00:09:28.186 21:16:51 -- 
common/autotest_common.sh@807 -- # id=0 00:09:28.186 21:16:51 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:28.186 21:16:51 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:28.186 21:16:51 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:28.186 21:16:51 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:28.186 21:16:51 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:28.186 21:16:51 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:28.186 nvmf_trace.0 00:09:28.186 21:16:51 -- common/autotest_common.sh@821 -- # return 0 00:09:28.186 21:16:51 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:28.186 21:16:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:28.186 21:16:51 -- nvmf/common.sh@116 -- # sync 00:09:28.446 21:16:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:28.446 21:16:52 -- nvmf/common.sh@119 -- # set +e 00:09:28.446 21:16:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:28.446 21:16:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:28.446 rmmod nvme_tcp 00:09:28.446 rmmod nvme_fabrics 00:09:28.446 rmmod nvme_keyring 00:09:28.446 21:16:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:28.446 21:16:52 -- nvmf/common.sh@123 -- # set -e 00:09:28.446 21:16:52 -- nvmf/common.sh@124 -- # return 0 00:09:28.446 21:16:52 -- nvmf/common.sh@477 -- # '[' -n 72935 ']' 00:09:28.446 21:16:52 -- nvmf/common.sh@478 -- # killprocess 72935 00:09:28.446 21:16:52 -- common/autotest_common.sh@936 -- # '[' -z 72935 ']' 00:09:28.446 21:16:52 -- common/autotest_common.sh@940 -- # kill -0 72935 00:09:28.446 21:16:52 -- common/autotest_common.sh@941 -- # uname 00:09:28.446 21:16:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:28.446 21:16:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72935 00:09:28.446 21:16:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:28.446 killing process with pid 72935 00:09:28.446 21:16:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:28.446 21:16:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72935' 00:09:28.446 21:16:52 -- common/autotest_common.sh@955 -- # kill 72935 00:09:28.446 21:16:52 -- common/autotest_common.sh@960 -- # wait 72935 00:09:28.705 21:16:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:28.705 21:16:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:28.705 21:16:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:28.705 21:16:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.705 21:16:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:28.705 21:16:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.705 21:16:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.705 21:16:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.705 21:16:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:28.705 ************************************ 00:09:28.705 END TEST nvmf_lvs_grow 00:09:28.705 ************************************ 00:09:28.705 00:09:28.705 real 0m40.800s 00:09:28.705 user 1m4.750s 00:09:28.705 sys 0m12.013s 00:09:28.705 21:16:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:28.705 21:16:52 -- common/autotest_common.sh@10 -- # set +x 00:09:28.705 21:16:52 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:28.705 21:16:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:28.705 21:16:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.705 21:16:52 -- common/autotest_common.sh@10 -- # set +x 00:09:28.705 ************************************ 00:09:28.705 START TEST nvmf_bdev_io_wait 00:09:28.705 ************************************ 00:09:28.705 21:16:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:28.705 * Looking for test storage... 00:09:28.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:28.705 21:16:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:28.705 21:16:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:28.705 21:16:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:28.964 21:16:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:28.964 21:16:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:28.964 21:16:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:28.964 21:16:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:28.964 21:16:52 -- scripts/common.sh@335 -- # IFS=.-: 00:09:28.964 21:16:52 -- scripts/common.sh@335 -- # read -ra ver1 00:09:28.964 21:16:52 -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.964 21:16:52 -- scripts/common.sh@336 -- # read -ra ver2 00:09:28.964 21:16:52 -- scripts/common.sh@337 -- # local 'op=<' 00:09:28.964 21:16:52 -- scripts/common.sh@339 -- # ver1_l=2 00:09:28.964 21:16:52 -- scripts/common.sh@340 -- # ver2_l=1 00:09:28.964 21:16:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:28.964 21:16:52 -- scripts/common.sh@343 -- # case "$op" in 00:09:28.964 21:16:52 -- scripts/common.sh@344 -- # : 1 00:09:28.964 21:16:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:28.964 21:16:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.964 21:16:52 -- scripts/common.sh@364 -- # decimal 1 00:09:28.964 21:16:52 -- scripts/common.sh@352 -- # local d=1 00:09:28.964 21:16:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.964 21:16:52 -- scripts/common.sh@354 -- # echo 1 00:09:28.964 21:16:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:28.964 21:16:52 -- scripts/common.sh@365 -- # decimal 2 00:09:28.964 21:16:52 -- scripts/common.sh@352 -- # local d=2 00:09:28.964 21:16:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.964 21:16:52 -- scripts/common.sh@354 -- # echo 2 00:09:28.964 21:16:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:28.964 21:16:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:28.964 21:16:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:28.964 21:16:52 -- scripts/common.sh@367 -- # return 0 00:09:28.964 21:16:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.964 21:16:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:28.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.964 --rc genhtml_branch_coverage=1 00:09:28.964 --rc genhtml_function_coverage=1 00:09:28.964 --rc genhtml_legend=1 00:09:28.964 --rc geninfo_all_blocks=1 00:09:28.964 --rc geninfo_unexecuted_blocks=1 00:09:28.964 00:09:28.964 ' 00:09:28.964 21:16:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:28.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.964 --rc genhtml_branch_coverage=1 00:09:28.964 --rc genhtml_function_coverage=1 00:09:28.964 --rc genhtml_legend=1 00:09:28.964 --rc geninfo_all_blocks=1 00:09:28.964 --rc geninfo_unexecuted_blocks=1 00:09:28.964 00:09:28.964 ' 00:09:28.964 21:16:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:28.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.964 --rc genhtml_branch_coverage=1 00:09:28.964 --rc genhtml_function_coverage=1 00:09:28.964 --rc genhtml_legend=1 00:09:28.964 --rc geninfo_all_blocks=1 00:09:28.964 --rc geninfo_unexecuted_blocks=1 00:09:28.964 00:09:28.964 ' 00:09:28.964 21:16:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:28.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.964 --rc genhtml_branch_coverage=1 00:09:28.964 --rc genhtml_function_coverage=1 00:09:28.964 --rc genhtml_legend=1 00:09:28.964 --rc geninfo_all_blocks=1 00:09:28.964 --rc geninfo_unexecuted_blocks=1 00:09:28.964 00:09:28.964 ' 00:09:28.964 21:16:52 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:28.964 21:16:52 -- nvmf/common.sh@7 -- # uname -s 00:09:28.964 21:16:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.964 21:16:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.964 21:16:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.964 21:16:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.964 21:16:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.964 21:16:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.964 21:16:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.964 21:16:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.964 21:16:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.964 21:16:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.964 21:16:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 
00:09:28.964 21:16:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:09:28.964 21:16:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.964 21:16:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.965 21:16:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:28.965 21:16:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.965 21:16:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.965 21:16:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.965 21:16:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.965 21:16:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.965 21:16:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.965 21:16:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.965 21:16:52 -- paths/export.sh@5 -- # export PATH 00:09:28.965 21:16:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.965 21:16:52 -- nvmf/common.sh@46 -- # : 0 00:09:28.965 21:16:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:28.965 21:16:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:28.965 21:16:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:28.965 21:16:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.965 21:16:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.965 21:16:52 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:09:28.965 21:16:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:28.965 21:16:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:28.965 21:16:52 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:28.965 21:16:52 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:28.965 21:16:52 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:28.965 21:16:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:28.965 21:16:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.965 21:16:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:28.965 21:16:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:28.965 21:16:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:28.965 21:16:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.965 21:16:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.965 21:16:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.965 21:16:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:28.965 21:16:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:28.965 21:16:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:28.965 21:16:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:28.965 21:16:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:28.965 21:16:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:28.965 21:16:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.965 21:16:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.965 21:16:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:28.965 21:16:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:28.965 21:16:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:28.965 21:16:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:28.965 21:16:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:28.965 21:16:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.965 21:16:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:28.965 21:16:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:28.965 21:16:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:28.965 21:16:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:28.965 21:16:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:28.965 21:16:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:28.965 Cannot find device "nvmf_tgt_br" 00:09:28.965 21:16:52 -- nvmf/common.sh@154 -- # true 00:09:28.965 21:16:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:28.965 Cannot find device "nvmf_tgt_br2" 00:09:28.965 21:16:52 -- nvmf/common.sh@155 -- # true 00:09:28.965 21:16:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:28.965 21:16:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:28.965 Cannot find device "nvmf_tgt_br" 00:09:28.965 21:16:52 -- nvmf/common.sh@157 -- # true 00:09:28.965 21:16:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:28.965 Cannot find device "nvmf_tgt_br2" 00:09:28.965 21:16:52 -- nvmf/common.sh@158 -- # true 00:09:28.965 21:16:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:28.965 21:16:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:28.965 21:16:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:28.965 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:28.965 21:16:52 -- nvmf/common.sh@161 -- # true 00:09:28.965 21:16:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:28.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:28.965 21:16:52 -- nvmf/common.sh@162 -- # true 00:09:28.965 21:16:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:28.965 21:16:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:28.965 21:16:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:29.225 21:16:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:29.225 21:16:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:29.225 21:16:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:29.225 21:16:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:29.225 21:16:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:29.225 21:16:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:29.225 21:16:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:29.225 21:16:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:29.225 21:16:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:29.225 21:16:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:29.225 21:16:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:29.225 21:16:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:29.225 21:16:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:29.225 21:16:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:29.225 21:16:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:29.225 21:16:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:29.225 21:16:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:29.225 21:16:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:29.225 21:16:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:29.225 21:16:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:29.225 21:16:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:29.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:09:29.225 00:09:29.225 --- 10.0.0.2 ping statistics --- 00:09:29.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.225 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:29.225 21:16:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:29.225 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:29.225 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:09:29.225 00:09:29.225 --- 10.0.0.3 ping statistics --- 00:09:29.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.225 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:29.225 21:16:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:29.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:29.225 00:09:29.225 --- 10.0.0.1 ping statistics --- 00:09:29.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.225 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:29.225 21:16:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.225 21:16:52 -- nvmf/common.sh@421 -- # return 0 00:09:29.225 21:16:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:29.225 21:16:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.225 21:16:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:29.225 21:16:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:29.225 21:16:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.225 21:16:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:29.225 21:16:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:29.225 21:16:52 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:29.225 21:16:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:29.225 21:16:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:29.225 21:16:52 -- common/autotest_common.sh@10 -- # set +x 00:09:29.225 21:16:52 -- nvmf/common.sh@469 -- # nvmfpid=73249 00:09:29.225 21:16:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:29.225 21:16:52 -- nvmf/common.sh@470 -- # waitforlisten 73249 00:09:29.225 21:16:52 -- common/autotest_common.sh@829 -- # '[' -z 73249 ']' 00:09:29.225 21:16:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.225 21:16:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.225 21:16:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.225 21:16:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.225 21:16:52 -- common/autotest_common.sh@10 -- # set +x 00:09:29.225 [2024-11-28 21:16:52.941187] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:29.225 [2024-11-28 21:16:52.941287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.484 [2024-11-28 21:16:53.074375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.484 [2024-11-28 21:16:53.110949] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:29.484 [2024-11-28 21:16:53.111140] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.484 [2024-11-28 21:16:53.111152] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.484 [2024-11-28 21:16:53.111160] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:29.484 [2024-11-28 21:16:53.111264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.484 [2024-11-28 21:16:53.112760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.484 [2024-11-28 21:16:53.112954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.484 [2024-11-28 21:16:53.112964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.743 21:16:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.743 21:16:53 -- common/autotest_common.sh@862 -- # return 0 00:09:29.743 21:16:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:29.743 21:16:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:29.743 21:16:53 -- common/autotest_common.sh@10 -- # set +x 00:09:29.743 21:16:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.743 21:16:53 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:29.743 21:16:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.743 21:16:53 -- common/autotest_common.sh@10 -- # set +x 00:09:29.744 21:16:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:29.744 21:16:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.744 21:16:53 -- common/autotest_common.sh@10 -- # set +x 00:09:29.744 21:16:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:29.744 21:16:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.744 21:16:53 -- common/autotest_common.sh@10 -- # set +x 00:09:29.744 [2024-11-28 21:16:53.326660] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.744 21:16:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:29.744 21:16:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.744 21:16:53 -- common/autotest_common.sh@10 -- # set +x 00:09:29.744 Malloc0 00:09:29.744 21:16:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:29.744 21:16:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.744 21:16:53 -- common/autotest_common.sh@10 -- # set +x 00:09:29.744 21:16:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:29.744 21:16:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.744 21:16:53 -- common/autotest_common.sh@10 -- # set +x 00:09:29.744 21:16:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.744 21:16:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.744 21:16:53 -- common/autotest_common.sh@10 -- # set +x 00:09:29.744 [2024-11-28 21:16:53.387932] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.744 21:16:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73282 00:09:29.744 21:16:53 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@30 -- # READ_PID=73284 00:09:29.744 21:16:53 -- nvmf/common.sh@520 -- # config=() 00:09:29.744 21:16:53 -- nvmf/common.sh@520 -- # local subsystem config 00:09:29.744 21:16:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:29.744 21:16:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:29.744 { 00:09:29.744 "params": { 00:09:29.744 "name": "Nvme$subsystem", 00:09:29.744 "trtype": "$TEST_TRANSPORT", 00:09:29.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.744 "adrfam": "ipv4", 00:09:29.744 "trsvcid": "$NVMF_PORT", 00:09:29.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.744 "hdgst": ${hdgst:-false}, 00:09:29.744 "ddgst": ${ddgst:-false} 00:09:29.744 }, 00:09:29.744 "method": "bdev_nvme_attach_controller" 00:09:29.744 } 00:09:29.744 EOF 00:09:29.744 )") 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73286 00:09:29.744 21:16:53 -- nvmf/common.sh@520 -- # config=() 00:09:29.744 21:16:53 -- nvmf/common.sh@520 -- # local subsystem config 00:09:29.744 21:16:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:29.744 21:16:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:29.744 { 00:09:29.744 "params": { 00:09:29.744 "name": "Nvme$subsystem", 00:09:29.744 "trtype": "$TEST_TRANSPORT", 00:09:29.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.744 "adrfam": "ipv4", 00:09:29.744 "trsvcid": "$NVMF_PORT", 00:09:29.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.744 "hdgst": ${hdgst:-false}, 00:09:29.744 "ddgst": ${ddgst:-false} 00:09:29.744 }, 00:09:29.744 "method": "bdev_nvme_attach_controller" 00:09:29.744 } 00:09:29.744 EOF 00:09:29.744 )") 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73289 00:09:29.744 21:16:53 -- nvmf/common.sh@542 -- # cat 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@35 -- # sync 00:09:29.744 21:16:53 -- nvmf/common.sh@542 -- # cat 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:29.744 21:16:53 -- nvmf/common.sh@520 -- # config=() 00:09:29.744 21:16:53 -- nvmf/common.sh@520 -- # local subsystem config 00:09:29.744 21:16:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:29.744 21:16:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:29.744 { 00:09:29.744 "params": { 00:09:29.744 "name": "Nvme$subsystem", 00:09:29.744 "trtype": "$TEST_TRANSPORT", 00:09:29.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.744 "adrfam": "ipv4", 00:09:29.744 "trsvcid": "$NVMF_PORT", 00:09:29.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:09:29.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.744 "hdgst": ${hdgst:-false}, 00:09:29.744 "ddgst": ${ddgst:-false} 00:09:29.744 }, 00:09:29.744 "method": "bdev_nvme_attach_controller" 00:09:29.744 } 00:09:29.744 EOF 00:09:29.744 )") 00:09:29.744 21:16:53 -- nvmf/common.sh@544 -- # jq . 00:09:29.744 21:16:53 -- nvmf/common.sh@544 -- # jq . 00:09:29.744 21:16:53 -- nvmf/common.sh@542 -- # cat 00:09:29.744 21:16:53 -- nvmf/common.sh@545 -- # IFS=, 00:09:29.744 21:16:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:29.744 "params": { 00:09:29.744 "name": "Nvme1", 00:09:29.744 "trtype": "tcp", 00:09:29.744 "traddr": "10.0.0.2", 00:09:29.744 "adrfam": "ipv4", 00:09:29.744 "trsvcid": "4420", 00:09:29.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:29.744 "hdgst": false, 00:09:29.744 "ddgst": false 00:09:29.744 }, 00:09:29.744 "method": "bdev_nvme_attach_controller" 00:09:29.744 }' 00:09:29.744 21:16:53 -- nvmf/common.sh@545 -- # IFS=, 00:09:29.744 21:16:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:29.744 "params": { 00:09:29.744 "name": "Nvme1", 00:09:29.744 "trtype": "tcp", 00:09:29.744 "traddr": "10.0.0.2", 00:09:29.744 "adrfam": "ipv4", 00:09:29.744 "trsvcid": "4420", 00:09:29.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:29.744 "hdgst": false, 00:09:29.744 "ddgst": false 00:09:29.744 }, 00:09:29.744 "method": "bdev_nvme_attach_controller" 00:09:29.744 }' 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:29.744 21:16:53 -- nvmf/common.sh@520 -- # config=() 00:09:29.744 21:16:53 -- nvmf/common.sh@520 -- # local subsystem config 00:09:29.744 21:16:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:29.744 21:16:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:29.744 { 00:09:29.744 "params": { 00:09:29.744 "name": "Nvme$subsystem", 00:09:29.744 "trtype": "$TEST_TRANSPORT", 00:09:29.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.744 "adrfam": "ipv4", 00:09:29.744 "trsvcid": "$NVMF_PORT", 00:09:29.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.744 "hdgst": ${hdgst:-false}, 00:09:29.744 "ddgst": ${ddgst:-false} 00:09:29.744 }, 00:09:29.744 "method": "bdev_nvme_attach_controller" 00:09:29.744 } 00:09:29.744 EOF 00:09:29.744 )") 00:09:29.744 21:16:53 -- nvmf/common.sh@544 -- # jq . 00:09:29.744 21:16:53 -- nvmf/common.sh@542 -- # cat 00:09:29.744 21:16:53 -- nvmf/common.sh@545 -- # IFS=, 00:09:29.744 21:16:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:29.744 "params": { 00:09:29.744 "name": "Nvme1", 00:09:29.744 "trtype": "tcp", 00:09:29.744 "traddr": "10.0.0.2", 00:09:29.744 "adrfam": "ipv4", 00:09:29.744 "trsvcid": "4420", 00:09:29.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:29.744 "hdgst": false, 00:09:29.744 "ddgst": false 00:09:29.744 }, 00:09:29.744 "method": "bdev_nvme_attach_controller" 00:09:29.744 }' 00:09:29.744 21:16:53 -- nvmf/common.sh@544 -- # jq . 
00:09:29.744 21:16:53 -- nvmf/common.sh@545 -- # IFS=, 00:09:29.744 21:16:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:29.744 "params": { 00:09:29.744 "name": "Nvme1", 00:09:29.744 "trtype": "tcp", 00:09:29.744 "traddr": "10.0.0.2", 00:09:29.744 "adrfam": "ipv4", 00:09:29.744 "trsvcid": "4420", 00:09:29.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:29.744 "hdgst": false, 00:09:29.744 "ddgst": false 00:09:29.744 }, 00:09:29.744 "method": "bdev_nvme_attach_controller" 00:09:29.744 }' 00:09:29.744 [2024-11-28 21:16:53.442806] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:29.744 [2024-11-28 21:16:53.442888] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:29.744 [2024-11-28 21:16:53.447383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:29.744 [2024-11-28 21:16:53.447469] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:29.744 21:16:53 -- target/bdev_io_wait.sh@37 -- # wait 73282 00:09:29.745 [2024-11-28 21:16:53.477811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:29.745 [2024-11-28 21:16:53.478602] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:30.004 [2024-11-28 21:16:53.486598] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:30.004 [2024-11-28 21:16:53.486687] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:30.004 [2024-11-28 21:16:53.626310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.004 [2024-11-28 21:16:53.650951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:30.004 [2024-11-28 21:16:53.664842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.004 [2024-11-28 21:16:53.689409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:30.004 [2024-11-28 21:16:53.714473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.004 [2024-11-28 21:16:53.739272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:30.262 [2024-11-28 21:16:53.758703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.262 Running I/O for 1 seconds... 00:09:30.262 [2024-11-28 21:16:53.788271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:09:30.262 Running I/O for 1 seconds... 00:09:30.262 Running I/O for 1 seconds... 00:09:30.262 Running I/O for 1 seconds... 
00:09:31.198 00:09:31.198 Latency(us) 00:09:31.198 [2024-11-28T21:16:54.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.198 [2024-11-28T21:16:54.941Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:31.198 Nvme1n1 : 1.03 6299.50 24.61 0.00 0.00 20030.77 8757.99 34555.35 00:09:31.198 [2024-11-28T21:16:54.941Z] =================================================================================================================== 00:09:31.198 [2024-11-28T21:16:54.941Z] Total : 6299.50 24.61 0.00 0.00 20030.77 8757.99 34555.35 00:09:31.198 00:09:31.198 Latency(us) 00:09:31.198 [2024-11-28T21:16:54.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.198 [2024-11-28T21:16:54.941Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:31.198 Nvme1n1 : 1.01 8605.20 33.61 0.00 0.00 14793.70 9949.56 27644.28 00:09:31.198 [2024-11-28T21:16:54.941Z] =================================================================================================================== 00:09:31.198 [2024-11-28T21:16:54.941Z] Total : 8605.20 33.61 0.00 0.00 14793.70 9949.56 27644.28 00:09:31.198 00:09:31.198 Latency(us) 00:09:31.198 [2024-11-28T21:16:54.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.198 [2024-11-28T21:16:54.942Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:31.199 Nvme1n1 : 1.00 148283.50 579.23 0.00 0.00 860.12 336.99 2398.02 00:09:31.199 [2024-11-28T21:16:54.942Z] =================================================================================================================== 00:09:31.199 [2024-11-28T21:16:54.942Z] Total : 148283.50 579.23 0.00 0.00 860.12 336.99 2398.02 00:09:31.199 00:09:31.199 Latency(us) 00:09:31.199 [2024-11-28T21:16:54.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.199 [2024-11-28T21:16:54.942Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:31.199 Nvme1n1 : 1.00 6754.07 26.38 0.00 0.00 18896.74 4915.20 41704.73 00:09:31.199 [2024-11-28T21:16:54.942Z] =================================================================================================================== 00:09:31.199 [2024-11-28T21:16:54.942Z] Total : 6754.07 26.38 0.00 0.00 18896.74 4915.20 41704.73 00:09:31.492 21:16:54 -- target/bdev_io_wait.sh@38 -- # wait 73284 00:09:31.492 21:16:54 -- target/bdev_io_wait.sh@39 -- # wait 73286 00:09:31.492 21:16:55 -- target/bdev_io_wait.sh@40 -- # wait 73289 00:09:31.492 21:16:55 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:31.492 21:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.492 21:16:55 -- common/autotest_common.sh@10 -- # set +x 00:09:31.492 21:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.492 21:16:55 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:31.492 21:16:55 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:31.492 21:16:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:31.492 21:16:55 -- nvmf/common.sh@116 -- # sync 00:09:31.492 21:16:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:31.492 21:16:55 -- nvmf/common.sh@119 -- # set +e 00:09:31.492 21:16:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:31.492 21:16:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:31.492 rmmod nvme_tcp 00:09:31.492 rmmod nvme_fabrics 00:09:31.492 rmmod nvme_keyring 00:09:31.492 21:16:55 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:31.492 21:16:55 -- nvmf/common.sh@123 -- # set -e 00:09:31.492 21:16:55 -- nvmf/common.sh@124 -- # return 0 00:09:31.492 21:16:55 -- nvmf/common.sh@477 -- # '[' -n 73249 ']' 00:09:31.492 21:16:55 -- nvmf/common.sh@478 -- # killprocess 73249 00:09:31.492 21:16:55 -- common/autotest_common.sh@936 -- # '[' -z 73249 ']' 00:09:31.492 21:16:55 -- common/autotest_common.sh@940 -- # kill -0 73249 00:09:31.492 21:16:55 -- common/autotest_common.sh@941 -- # uname 00:09:31.492 21:16:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:31.492 21:16:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73249 00:09:31.492 21:16:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:31.492 21:16:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:31.492 killing process with pid 73249 00:09:31.492 21:16:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73249' 00:09:31.492 21:16:55 -- common/autotest_common.sh@955 -- # kill 73249 00:09:31.492 21:16:55 -- common/autotest_common.sh@960 -- # wait 73249 00:09:31.767 21:16:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:31.767 21:16:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:31.767 21:16:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:31.767 21:16:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.767 21:16:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:31.767 21:16:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.767 21:16:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.767 21:16:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.767 21:16:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:31.767 00:09:31.767 real 0m3.044s 00:09:31.767 user 0m13.086s 00:09:31.767 sys 0m1.911s 00:09:31.767 21:16:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:31.767 ************************************ 00:09:31.767 END TEST nvmf_bdev_io_wait 00:09:31.767 21:16:55 -- common/autotest_common.sh@10 -- # set +x 00:09:31.767 ************************************ 00:09:31.767 21:16:55 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:31.767 21:16:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:31.767 21:16:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:31.767 21:16:55 -- common/autotest_common.sh@10 -- # set +x 00:09:31.767 ************************************ 00:09:31.767 START TEST nvmf_queue_depth 00:09:31.767 ************************************ 00:09:31.767 21:16:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:31.767 * Looking for test storage... 
00:09:31.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:31.767 21:16:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:31.767 21:16:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:31.767 21:16:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:32.025 21:16:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:32.025 21:16:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:32.025 21:16:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:32.025 21:16:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:32.025 21:16:55 -- scripts/common.sh@335 -- # IFS=.-: 00:09:32.025 21:16:55 -- scripts/common.sh@335 -- # read -ra ver1 00:09:32.025 21:16:55 -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.025 21:16:55 -- scripts/common.sh@336 -- # read -ra ver2 00:09:32.025 21:16:55 -- scripts/common.sh@337 -- # local 'op=<' 00:09:32.025 21:16:55 -- scripts/common.sh@339 -- # ver1_l=2 00:09:32.025 21:16:55 -- scripts/common.sh@340 -- # ver2_l=1 00:09:32.025 21:16:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:32.025 21:16:55 -- scripts/common.sh@343 -- # case "$op" in 00:09:32.025 21:16:55 -- scripts/common.sh@344 -- # : 1 00:09:32.025 21:16:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:32.025 21:16:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:32.025 21:16:55 -- scripts/common.sh@364 -- # decimal 1 00:09:32.025 21:16:55 -- scripts/common.sh@352 -- # local d=1 00:09:32.025 21:16:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.025 21:16:55 -- scripts/common.sh@354 -- # echo 1 00:09:32.025 21:16:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:32.025 21:16:55 -- scripts/common.sh@365 -- # decimal 2 00:09:32.025 21:16:55 -- scripts/common.sh@352 -- # local d=2 00:09:32.025 21:16:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.025 21:16:55 -- scripts/common.sh@354 -- # echo 2 00:09:32.025 21:16:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:32.025 21:16:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:32.025 21:16:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:32.025 21:16:55 -- scripts/common.sh@367 -- # return 0 00:09:32.025 21:16:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.025 21:16:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:32.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.025 --rc genhtml_branch_coverage=1 00:09:32.025 --rc genhtml_function_coverage=1 00:09:32.025 --rc genhtml_legend=1 00:09:32.025 --rc geninfo_all_blocks=1 00:09:32.025 --rc geninfo_unexecuted_blocks=1 00:09:32.025 00:09:32.025 ' 00:09:32.025 21:16:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:32.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.025 --rc genhtml_branch_coverage=1 00:09:32.025 --rc genhtml_function_coverage=1 00:09:32.025 --rc genhtml_legend=1 00:09:32.025 --rc geninfo_all_blocks=1 00:09:32.026 --rc geninfo_unexecuted_blocks=1 00:09:32.026 00:09:32.026 ' 00:09:32.026 21:16:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:32.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.026 --rc genhtml_branch_coverage=1 00:09:32.026 --rc genhtml_function_coverage=1 00:09:32.026 --rc genhtml_legend=1 00:09:32.026 --rc geninfo_all_blocks=1 00:09:32.026 --rc geninfo_unexecuted_blocks=1 00:09:32.026 00:09:32.026 ' 00:09:32.026 
21:16:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:32.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.026 --rc genhtml_branch_coverage=1 00:09:32.026 --rc genhtml_function_coverage=1 00:09:32.026 --rc genhtml_legend=1 00:09:32.026 --rc geninfo_all_blocks=1 00:09:32.026 --rc geninfo_unexecuted_blocks=1 00:09:32.026 00:09:32.026 ' 00:09:32.026 21:16:55 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:32.026 21:16:55 -- nvmf/common.sh@7 -- # uname -s 00:09:32.026 21:16:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.026 21:16:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.026 21:16:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.026 21:16:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.026 21:16:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.026 21:16:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.026 21:16:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.026 21:16:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.026 21:16:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.026 21:16:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.026 21:16:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:09:32.026 21:16:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:09:32.026 21:16:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.026 21:16:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.026 21:16:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:32.026 21:16:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.026 21:16:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.026 21:16:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.026 21:16:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.026 21:16:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.026 21:16:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.026 21:16:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.026 21:16:55 -- paths/export.sh@5 -- # export PATH 00:09:32.026 21:16:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.026 21:16:55 -- nvmf/common.sh@46 -- # : 0 00:09:32.026 21:16:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:32.026 21:16:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:32.026 21:16:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:32.026 21:16:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.026 21:16:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.026 21:16:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:32.026 21:16:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:32.026 21:16:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:32.026 21:16:55 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:32.026 21:16:55 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:32.026 21:16:55 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:32.026 21:16:55 -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:32.026 21:16:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:32.026 21:16:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.026 21:16:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:32.026 21:16:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:32.026 21:16:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:32.026 21:16:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.026 21:16:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:32.026 21:16:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.026 21:16:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:32.026 21:16:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:32.026 21:16:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:32.026 21:16:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:32.026 21:16:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:32.026 21:16:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:32.026 21:16:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.026 21:16:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.026 21:16:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:32.026 21:16:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:32.026 21:16:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:32.026 21:16:55 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:32.026 21:16:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:32.026 21:16:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.026 21:16:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:32.026 21:16:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:32.026 21:16:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:32.026 21:16:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:32.026 21:16:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:32.026 21:16:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:32.026 Cannot find device "nvmf_tgt_br" 00:09:32.026 21:16:55 -- nvmf/common.sh@154 -- # true 00:09:32.026 21:16:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:32.026 Cannot find device "nvmf_tgt_br2" 00:09:32.026 21:16:55 -- nvmf/common.sh@155 -- # true 00:09:32.026 21:16:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:32.026 21:16:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:32.026 Cannot find device "nvmf_tgt_br" 00:09:32.026 21:16:55 -- nvmf/common.sh@157 -- # true 00:09:32.026 21:16:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:32.026 Cannot find device "nvmf_tgt_br2" 00:09:32.026 21:16:55 -- nvmf/common.sh@158 -- # true 00:09:32.026 21:16:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:32.026 21:16:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:32.026 21:16:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:32.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.026 21:16:55 -- nvmf/common.sh@161 -- # true 00:09:32.026 21:16:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:32.285 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.285 21:16:55 -- nvmf/common.sh@162 -- # true 00:09:32.285 21:16:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:32.285 21:16:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:32.285 21:16:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:32.285 21:16:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:32.285 21:16:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:32.285 21:16:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:32.285 21:16:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:32.285 21:16:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:32.285 21:16:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:32.285 21:16:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:32.285 21:16:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:32.285 21:16:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:32.285 21:16:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:32.285 21:16:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:32.285 21:16:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:09:32.285 21:16:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:32.285 21:16:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:32.285 21:16:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:32.285 21:16:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:32.285 21:16:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:32.285 21:16:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:32.285 21:16:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:32.285 21:16:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:32.285 21:16:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:32.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:09:32.285 00:09:32.285 --- 10.0.0.2 ping statistics --- 00:09:32.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.285 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:32.285 21:16:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:32.285 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:32.285 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:09:32.285 00:09:32.285 --- 10.0.0.3 ping statistics --- 00:09:32.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.285 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:32.285 21:16:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:32.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:32.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:32.285 00:09:32.285 --- 10.0.0.1 ping statistics --- 00:09:32.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.285 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:32.285 21:16:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.285 21:16:55 -- nvmf/common.sh@421 -- # return 0 00:09:32.285 21:16:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:32.285 21:16:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.285 21:16:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:32.285 21:16:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:32.285 21:16:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.285 21:16:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:32.285 21:16:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:32.285 21:16:55 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:32.285 21:16:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:32.285 21:16:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:32.285 21:16:55 -- common/autotest_common.sh@10 -- # set +x 00:09:32.285 21:16:55 -- nvmf/common.sh@469 -- # nvmfpid=73496 00:09:32.285 21:16:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:32.285 21:16:55 -- nvmf/common.sh@470 -- # waitforlisten 73496 00:09:32.285 21:16:55 -- common/autotest_common.sh@829 -- # '[' -z 73496 ']' 00:09:32.285 21:16:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.285 21:16:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.285 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:09:32.285 21:16:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.285 21:16:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.285 21:16:55 -- common/autotest_common.sh@10 -- # set +x 00:09:32.544 [2024-11-28 21:16:56.043155] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:32.544 [2024-11-28 21:16:56.043237] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.544 [2024-11-28 21:16:56.184723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.544 [2024-11-28 21:16:56.215208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:32.544 [2024-11-28 21:16:56.215342] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.544 [2024-11-28 21:16:56.215353] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.544 [2024-11-28 21:16:56.215360] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.544 [2024-11-28 21:16:56.215389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.544 21:16:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.545 21:16:56 -- common/autotest_common.sh@862 -- # return 0 00:09:32.545 21:16:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:32.545 21:16:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:32.545 21:16:56 -- common/autotest_common.sh@10 -- # set +x 00:09:32.803 21:16:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.803 21:16:56 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:32.803 21:16:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.803 21:16:56 -- common/autotest_common.sh@10 -- # set +x 00:09:32.803 [2024-11-28 21:16:56.328701] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.803 21:16:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.803 21:16:56 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:32.803 21:16:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.803 21:16:56 -- common/autotest_common.sh@10 -- # set +x 00:09:32.803 Malloc0 00:09:32.803 21:16:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.803 21:16:56 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:32.803 21:16:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.804 21:16:56 -- common/autotest_common.sh@10 -- # set +x 00:09:32.804 21:16:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.804 21:16:56 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:32.804 21:16:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.804 21:16:56 -- common/autotest_common.sh@10 -- # set +x 00:09:32.804 21:16:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.804 21:16:56 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
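The provisioning traced above boils down to five RPCs against the nvmf_tgt started a few lines earlier. As a stand-alone sketch (assuming the target is up and answering on its default /var/tmp/spdk.sock, and using the repo path shown in the log), the same sequence can be issued directly with rpc.py:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport with the same options the test passes
  $RPC nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
  $RPC bdev_malloc_create 64 512 -b Malloc0
  # Subsystem that allows any host (-a), serial number as traced
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Listener on the veth address assigned to the target namespace during nvmf_veth_init
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The listener address matches the 10.0.0.2/24 that nvmf_veth_init gave nvmf_tgt_if above, which is why the ping checks run before any of these RPCs.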
00:09:32.804 21:16:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.804 21:16:56 -- common/autotest_common.sh@10 -- # set +x 00:09:32.804 [2024-11-28 21:16:56.380176] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.804 21:16:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.804 21:16:56 -- target/queue_depth.sh@30 -- # bdevperf_pid=73525 00:09:32.804 21:16:56 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:32.804 21:16:56 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:32.804 21:16:56 -- target/queue_depth.sh@33 -- # waitforlisten 73525 /var/tmp/bdevperf.sock 00:09:32.804 21:16:56 -- common/autotest_common.sh@829 -- # '[' -z 73525 ']' 00:09:32.804 21:16:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:32.804 21:16:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:32.804 21:16:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:32.804 21:16:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.804 21:16:56 -- common/autotest_common.sh@10 -- # set +x 00:09:32.804 [2024-11-28 21:16:56.427895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:32.804 [2024-11-28 21:16:56.427970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73525 ] 00:09:33.063 [2024-11-28 21:16:56.563510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.063 [2024-11-28 21:16:56.603611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.000 21:16:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.000 21:16:57 -- common/autotest_common.sh@862 -- # return 0 00:09:34.000 21:16:57 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:34.000 21:16:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.000 21:16:57 -- common/autotest_common.sh@10 -- # set +x 00:09:34.000 NVMe0n1 00:09:34.000 21:16:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.000 21:16:57 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:34.000 Running I/O for 10 seconds... 
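On the initiator side, the "Running I/O for 10 seconds..." message above comes from bdevperf, which the script starts idle and then drives over a second RPC socket. A condensed sketch of those steps as traced (the script additionally waits for each socket with waitforlisten before sending RPCs, which is omitted here):

  SPDK=/home/vagrant/spdk_repo/spdk
  # Start bdevperf suspended (-z): queue depth 1024, 4 KiB I/O, verify workload, 10 s run
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # Attach the namespace exported above; it shows up as bdev NVMe0n1
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Kick off the run; the result is the Latency(us) table that follows in the log
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -q 1024 queue depth is the parameter under test here, matching the "depth: 1024" reported in the result table below.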
00:09:43.975 00:09:43.975 Latency(us) 00:09:43.975 [2024-11-28T21:17:07.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.975 [2024-11-28T21:17:07.718Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:43.975 Verification LBA range: start 0x0 length 0x4000 00:09:43.975 NVMe0n1 : 10.06 15087.84 58.94 0.00 0.00 67627.94 12571.00 55765.18 00:09:43.975 [2024-11-28T21:17:07.718Z] =================================================================================================================== 00:09:43.975 [2024-11-28T21:17:07.718Z] Total : 15087.84 58.94 0.00 0.00 67627.94 12571.00 55765.18 00:09:43.975 0 00:09:43.975 21:17:07 -- target/queue_depth.sh@39 -- # killprocess 73525 00:09:43.975 21:17:07 -- common/autotest_common.sh@936 -- # '[' -z 73525 ']' 00:09:43.975 21:17:07 -- common/autotest_common.sh@940 -- # kill -0 73525 00:09:43.975 21:17:07 -- common/autotest_common.sh@941 -- # uname 00:09:43.975 21:17:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:43.975 21:17:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73525 00:09:44.233 21:17:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:44.233 21:17:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:44.233 killing process with pid 73525 00:09:44.233 21:17:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73525' 00:09:44.233 Received shutdown signal, test time was about 10.000000 seconds 00:09:44.233 00:09:44.233 Latency(us) 00:09:44.233 [2024-11-28T21:17:07.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.233 [2024-11-28T21:17:07.976Z] =================================================================================================================== 00:09:44.233 [2024-11-28T21:17:07.976Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:44.233 21:17:07 -- common/autotest_common.sh@955 -- # kill 73525 00:09:44.233 21:17:07 -- common/autotest_common.sh@960 -- # wait 73525 00:09:44.233 21:17:07 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:44.233 21:17:07 -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:44.233 21:17:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:44.233 21:17:07 -- nvmf/common.sh@116 -- # sync 00:09:44.233 21:17:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:44.233 21:17:07 -- nvmf/common.sh@119 -- # set +e 00:09:44.233 21:17:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:44.233 21:17:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:44.233 rmmod nvme_tcp 00:09:44.233 rmmod nvme_fabrics 00:09:44.233 rmmod nvme_keyring 00:09:44.491 21:17:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:44.491 21:17:07 -- nvmf/common.sh@123 -- # set -e 00:09:44.491 21:17:07 -- nvmf/common.sh@124 -- # return 0 00:09:44.491 21:17:07 -- nvmf/common.sh@477 -- # '[' -n 73496 ']' 00:09:44.491 21:17:07 -- nvmf/common.sh@478 -- # killprocess 73496 00:09:44.491 21:17:07 -- common/autotest_common.sh@936 -- # '[' -z 73496 ']' 00:09:44.491 21:17:07 -- common/autotest_common.sh@940 -- # kill -0 73496 00:09:44.491 21:17:07 -- common/autotest_common.sh@941 -- # uname 00:09:44.491 21:17:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:44.491 21:17:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73496 00:09:44.491 21:17:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:44.491 21:17:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:09:44.491 killing process with pid 73496 00:09:44.491 21:17:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73496' 00:09:44.491 21:17:08 -- common/autotest_common.sh@955 -- # kill 73496 00:09:44.491 21:17:08 -- common/autotest_common.sh@960 -- # wait 73496 00:09:44.491 21:17:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:44.491 21:17:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:44.491 21:17:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:44.491 21:17:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.491 21:17:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:44.491 21:17:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.491 21:17:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.491 21:17:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.491 21:17:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:44.491 00:09:44.491 real 0m12.783s 00:09:44.491 user 0m22.722s 00:09:44.491 sys 0m1.892s 00:09:44.491 21:17:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:44.491 21:17:08 -- common/autotest_common.sh@10 -- # set +x 00:09:44.491 ************************************ 00:09:44.491 END TEST nvmf_queue_depth 00:09:44.491 ************************************ 00:09:44.751 21:17:08 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:44.751 21:17:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:44.751 21:17:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:44.751 21:17:08 -- common/autotest_common.sh@10 -- # set +x 00:09:44.751 ************************************ 00:09:44.751 START TEST nvmf_multipath 00:09:44.751 ************************************ 00:09:44.751 21:17:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:44.751 * Looking for test storage... 00:09:44.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:44.751 21:17:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:44.751 21:17:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:44.751 21:17:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:44.751 21:17:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:44.751 21:17:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:44.751 21:17:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:44.751 21:17:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:44.751 21:17:08 -- scripts/common.sh@335 -- # IFS=.-: 00:09:44.751 21:17:08 -- scripts/common.sh@335 -- # read -ra ver1 00:09:44.751 21:17:08 -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.751 21:17:08 -- scripts/common.sh@336 -- # read -ra ver2 00:09:44.751 21:17:08 -- scripts/common.sh@337 -- # local 'op=<' 00:09:44.751 21:17:08 -- scripts/common.sh@339 -- # ver1_l=2 00:09:44.751 21:17:08 -- scripts/common.sh@340 -- # ver2_l=1 00:09:44.751 21:17:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:44.751 21:17:08 -- scripts/common.sh@343 -- # case "$op" in 00:09:44.751 21:17:08 -- scripts/common.sh@344 -- # : 1 00:09:44.751 21:17:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:44.751 21:17:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.751 21:17:08 -- scripts/common.sh@364 -- # decimal 1 00:09:44.751 21:17:08 -- scripts/common.sh@352 -- # local d=1 00:09:44.751 21:17:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.751 21:17:08 -- scripts/common.sh@354 -- # echo 1 00:09:44.751 21:17:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:44.751 21:17:08 -- scripts/common.sh@365 -- # decimal 2 00:09:44.751 21:17:08 -- scripts/common.sh@352 -- # local d=2 00:09:44.751 21:17:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.751 21:17:08 -- scripts/common.sh@354 -- # echo 2 00:09:44.751 21:17:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:44.751 21:17:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:44.751 21:17:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:44.751 21:17:08 -- scripts/common.sh@367 -- # return 0 00:09:44.751 21:17:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.751 21:17:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:44.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.751 --rc genhtml_branch_coverage=1 00:09:44.751 --rc genhtml_function_coverage=1 00:09:44.751 --rc genhtml_legend=1 00:09:44.751 --rc geninfo_all_blocks=1 00:09:44.751 --rc geninfo_unexecuted_blocks=1 00:09:44.751 00:09:44.751 ' 00:09:44.751 21:17:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:44.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.751 --rc genhtml_branch_coverage=1 00:09:44.751 --rc genhtml_function_coverage=1 00:09:44.751 --rc genhtml_legend=1 00:09:44.751 --rc geninfo_all_blocks=1 00:09:44.751 --rc geninfo_unexecuted_blocks=1 00:09:44.751 00:09:44.751 ' 00:09:44.751 21:17:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:44.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.751 --rc genhtml_branch_coverage=1 00:09:44.751 --rc genhtml_function_coverage=1 00:09:44.751 --rc genhtml_legend=1 00:09:44.751 --rc geninfo_all_blocks=1 00:09:44.751 --rc geninfo_unexecuted_blocks=1 00:09:44.751 00:09:44.751 ' 00:09:44.751 21:17:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:44.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.751 --rc genhtml_branch_coverage=1 00:09:44.751 --rc genhtml_function_coverage=1 00:09:44.751 --rc genhtml_legend=1 00:09:44.751 --rc geninfo_all_blocks=1 00:09:44.751 --rc geninfo_unexecuted_blocks=1 00:09:44.751 00:09:44.751 ' 00:09:44.751 21:17:08 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:44.751 21:17:08 -- nvmf/common.sh@7 -- # uname -s 00:09:44.751 21:17:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.751 21:17:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.751 21:17:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.751 21:17:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.751 21:17:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.751 21:17:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.751 21:17:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.751 21:17:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.751 21:17:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.751 21:17:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.751 21:17:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:09:44.751 
21:17:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:09:44.751 21:17:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.751 21:17:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.751 21:17:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:44.751 21:17:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:44.751 21:17:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.751 21:17:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.751 21:17:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.751 21:17:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.751 21:17:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.751 21:17:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.751 21:17:08 -- paths/export.sh@5 -- # export PATH 00:09:44.751 21:17:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.751 21:17:08 -- nvmf/common.sh@46 -- # : 0 00:09:44.751 21:17:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:44.751 21:17:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:44.751 21:17:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:44.751 21:17:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.751 21:17:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.751 21:17:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
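The "Looking for test storage..." preamble that opened this test file (and that reappears for the zcopy test at the end of this excerpt) is just gating on the installed lcov version with a component-wise comparison; the real helpers are lt/cmp_versions in scripts/common.sh, stepped through verbosely in the trace. A condensed, roughly equivalent sketch, assuming purely numeric dot/dash/colon-separated versions (version_lt is a hypothetical name used only for illustration):

  # Succeeds when version $1 sorts before version $2 (components split on . - :)
  version_lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # versions are equal
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the lt 1.15 2 call traced above

The branch the test takes here (1.15 < 2) selects the --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 option spelling that is then echoed into LCOV_OPTS above.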
00:09:44.751 21:17:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:44.752 21:17:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:44.752 21:17:08 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.752 21:17:08 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:44.752 21:17:08 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:44.752 21:17:08 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:44.752 21:17:08 -- target/multipath.sh@43 -- # nvmftestinit 00:09:44.752 21:17:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:44.752 21:17:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.752 21:17:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:44.752 21:17:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:44.752 21:17:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:44.752 21:17:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.752 21:17:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.752 21:17:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.752 21:17:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:44.752 21:17:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:44.752 21:17:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:44.752 21:17:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:44.752 21:17:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:44.752 21:17:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:44.752 21:17:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.752 21:17:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:44.752 21:17:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:44.752 21:17:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:44.752 21:17:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:44.752 21:17:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:44.752 21:17:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:44.752 21:17:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.752 21:17:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:44.752 21:17:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:44.752 21:17:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:44.752 21:17:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:44.752 21:17:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:44.752 21:17:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:44.752 Cannot find device "nvmf_tgt_br" 00:09:44.752 21:17:08 -- nvmf/common.sh@154 -- # true 00:09:44.752 21:17:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:44.752 Cannot find device "nvmf_tgt_br2" 00:09:45.010 21:17:08 -- nvmf/common.sh@155 -- # true 00:09:45.010 21:17:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:45.010 21:17:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:45.010 Cannot find device "nvmf_tgt_br" 00:09:45.010 21:17:08 -- nvmf/common.sh@157 -- # true 00:09:45.010 21:17:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:45.010 Cannot find device "nvmf_tgt_br2" 00:09:45.010 21:17:08 -- nvmf/common.sh@158 -- # true 00:09:45.010 21:17:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:45.010 21:17:08 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:45.010 21:17:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.010 21:17:08 -- nvmf/common.sh@161 -- # true 00:09:45.010 21:17:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.010 21:17:08 -- nvmf/common.sh@162 -- # true 00:09:45.010 21:17:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:45.010 21:17:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:45.010 21:17:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:45.010 21:17:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:45.010 21:17:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:45.010 21:17:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:45.010 21:17:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:45.010 21:17:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:45.010 21:17:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:45.010 21:17:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:45.010 21:17:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:45.010 21:17:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:45.010 21:17:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:45.010 21:17:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:45.010 21:17:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:45.010 21:17:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:45.010 21:17:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:45.269 21:17:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:45.269 21:17:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:45.269 21:17:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:45.269 21:17:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:45.269 21:17:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:45.269 21:17:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:45.269 21:17:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:45.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:09:45.269 00:09:45.269 --- 10.0.0.2 ping statistics --- 00:09:45.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.269 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:45.269 21:17:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:45.269 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:45.269 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:09:45.269 00:09:45.269 --- 10.0.0.3 ping statistics --- 00:09:45.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.269 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:45.269 21:17:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:45.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:09:45.269 00:09:45.269 --- 10.0.0.1 ping statistics --- 00:09:45.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.269 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:09:45.269 21:17:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.269 21:17:08 -- nvmf/common.sh@421 -- # return 0 00:09:45.269 21:17:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:45.269 21:17:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.269 21:17:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:45.269 21:17:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:45.269 21:17:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.269 21:17:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:45.269 21:17:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:45.269 21:17:08 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:45.269 21:17:08 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:45.269 21:17:08 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:45.269 21:17:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:45.269 21:17:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:45.269 21:17:08 -- common/autotest_common.sh@10 -- # set +x 00:09:45.269 21:17:08 -- nvmf/common.sh@469 -- # nvmfpid=73848 00:09:45.269 21:17:08 -- nvmf/common.sh@470 -- # waitforlisten 73848 00:09:45.269 21:17:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:45.269 21:17:08 -- common/autotest_common.sh@829 -- # '[' -z 73848 ']' 00:09:45.269 21:17:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.269 21:17:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:45.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.269 21:17:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.269 21:17:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:45.269 21:17:08 -- common/autotest_common.sh@10 -- # set +x 00:09:45.269 [2024-11-28 21:17:08.891640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:45.269 [2024-11-28 21:17:08.891738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.528 [2024-11-28 21:17:09.032371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.528 [2024-11-28 21:17:09.073530] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:45.528 [2024-11-28 21:17:09.073958] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:45.528 [2024-11-28 21:17:09.074149] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.528 [2024-11-28 21:17:09.074277] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.528 [2024-11-28 21:17:09.074387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.528 [2024-11-28 21:17:09.074640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.528 [2024-11-28 21:17:09.074788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.528 [2024-11-28 21:17:09.074796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.462 21:17:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:46.462 21:17:09 -- common/autotest_common.sh@862 -- # return 0 00:09:46.462 21:17:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:46.462 21:17:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:46.462 21:17:09 -- common/autotest_common.sh@10 -- # set +x 00:09:46.462 21:17:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.462 21:17:09 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:46.720 [2024-11-28 21:17:10.215698] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.720 21:17:10 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:46.979 Malloc0 00:09:46.979 21:17:10 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:47.237 21:17:10 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:47.237 21:17:10 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.495 [2024-11-28 21:17:11.185939] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.495 21:17:11 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:47.753 [2024-11-28 21:17:11.458258] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:47.754 21:17:11 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:48.012 21:17:11 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:48.272 21:17:11 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:48.272 21:17:11 -- common/autotest_common.sh@1187 -- # local i=0 00:09:48.272 21:17:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:48.272 21:17:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:09:48.272 21:17:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:09:50.203 21:17:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
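Above, the multipath test publishes nqn.2016-06.io.spdk:cnode1 on two portals (10.0.0.2 and 10.0.0.3) and connects the initiator to both; the subsystem was created with -r (ANA reporting), which is what gives each listener an ANA state the script can poll and flip. A condensed sketch of the steps involved, reusing the host NQN/ID generated earlier in the trace (the nvme0c0n1/nvme0c1n1 names are what this particular run produced and will differ elsewhere):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8
  HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8
  # One connect per portal, flags exactly as traced
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

  # Each path surfaces as a per-controller namespace node; its ANA state is read from sysfs
  cat /sys/block/nvme0c0n1/ana_state    # path via 10.0.0.2
  cat /sys/block/nvme0c1n1/ana_state    # path via 10.0.0.3

  # During the fio runs below, the script flips states per listener on the target, e.g.:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible

check_ana_state then simply re-reads the sysfs file until it shows the expected inaccessible / non-optimized / optimized value, which is the path-failover behaviour the two fio runs below exercise.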
00:09:50.203 21:17:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:09:50.203 21:17:13 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:09:50.203 21:17:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:09:50.203 21:17:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:09:50.203 21:17:13 -- common/autotest_common.sh@1197 -- # return 0 00:09:50.203 21:17:13 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:50.203 21:17:13 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:50.203 21:17:13 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:50.203 21:17:13 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:50.203 21:17:13 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:50.203 21:17:13 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:50.203 21:17:13 -- target/multipath.sh@38 -- # return 0 00:09:50.203 21:17:13 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:50.203 21:17:13 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:50.203 21:17:13 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:50.203 21:17:13 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:50.203 21:17:13 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:50.203 21:17:13 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:50.203 21:17:13 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:50.203 21:17:13 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:50.203 21:17:13 -- target/multipath.sh@22 -- # local timeout=20 00:09:50.203 21:17:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:50.203 21:17:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:50.203 21:17:13 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:50.203 21:17:13 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:50.203 21:17:13 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:50.203 21:17:13 -- target/multipath.sh@22 -- # local timeout=20 00:09:50.203 21:17:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:50.203 21:17:13 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:50.203 21:17:13 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:50.203 21:17:13 -- target/multipath.sh@85 -- # echo numa 00:09:50.203 21:17:13 -- target/multipath.sh@88 -- # fio_pid=73943 00:09:50.203 21:17:13 -- target/multipath.sh@90 -- # sleep 1 00:09:50.203 21:17:13 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:50.203 [global] 00:09:50.203 thread=1 00:09:50.203 invalidate=1 00:09:50.203 rw=randrw 00:09:50.203 time_based=1 00:09:50.203 runtime=6 00:09:50.203 ioengine=libaio 00:09:50.203 direct=1 00:09:50.203 bs=4096 00:09:50.203 iodepth=128 00:09:50.203 norandommap=0 00:09:50.203 numjobs=1 00:09:50.203 00:09:50.203 verify_dump=1 00:09:50.203 verify_backlog=512 00:09:50.203 verify_state_save=0 00:09:50.203 do_verify=1 00:09:50.203 verify=crc32c-intel 00:09:50.203 [job0] 00:09:50.203 filename=/dev/nvme0n1 00:09:50.203 Could not set queue depth (nvme0n1) 00:09:50.461 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:50.461 fio-3.35 00:09:50.461 Starting 1 thread 00:09:51.395 21:17:14 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:51.395 21:17:15 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:51.652 21:17:15 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:51.652 21:17:15 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:51.652 21:17:15 -- target/multipath.sh@22 -- # local timeout=20 00:09:51.652 21:17:15 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:51.652 21:17:15 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:51.653 21:17:15 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:51.653 21:17:15 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:51.653 21:17:15 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:51.653 21:17:15 -- target/multipath.sh@22 -- # local timeout=20 00:09:51.653 21:17:15 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:51.653 21:17:15 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:51.653 21:17:15 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:51.653 21:17:15 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:52.218 21:17:15 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:52.218 21:17:15 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:52.218 21:17:15 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:52.218 21:17:15 -- target/multipath.sh@22 -- # local timeout=20 00:09:52.218 21:17:15 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:52.218 21:17:15 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:52.218 21:17:15 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:52.218 21:17:15 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:52.218 21:17:15 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:52.218 21:17:15 -- target/multipath.sh@22 -- # local timeout=20 00:09:52.218 21:17:15 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:52.218 21:17:15 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:52.218 21:17:15 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:52.218 21:17:15 -- target/multipath.sh@104 -- # wait 73943 00:09:57.506 00:09:57.506 job0: (groupid=0, jobs=1): err= 0: pid=73964: Thu Nov 28 21:17:20 2024 00:09:57.506 read: IOPS=11.0k, BW=43.0MiB/s (45.0MB/s)(258MiB/6006msec) 00:09:57.506 slat (usec): min=4, max=5415, avg=53.38, stdev=218.43 00:09:57.506 clat (usec): min=1380, max=14347, avg=7916.12, stdev=1428.74 00:09:57.506 lat (usec): min=1390, max=14362, avg=7969.50, stdev=1432.92 00:09:57.506 clat percentiles (usec): 00:09:57.506 | 1.00th=[ 4080], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 7111], 00:09:57.506 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 8029], 00:09:57.506 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[11207], 00:09:57.506 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13173], 99.95th=[13304], 00:09:57.506 | 99.99th=[13960] 00:09:57.506 bw ( KiB/s): min= 9896, max=29080, per=52.18%, avg=22953.45, stdev=5719.91, samples=11 00:09:57.506 iops : min= 2474, max= 7270, avg=5738.36, stdev=1429.98, samples=11 00:09:57.506 write: IOPS=6311, BW=24.7MiB/s (25.8MB/s)(135MiB/5469msec); 0 zone resets 00:09:57.506 slat (usec): min=14, max=2050, avg=62.29, stdev=148.33 00:09:57.506 clat (usec): min=2067, max=13834, avg=6926.26, stdev=1277.56 00:09:57.506 lat (usec): min=2112, max=14007, avg=6988.55, stdev=1282.10 00:09:57.507 clat percentiles (usec): 00:09:57.507 | 1.00th=[ 3195], 5.00th=[ 4047], 10.00th=[ 5080], 20.00th=[ 6390], 00:09:57.507 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7308], 00:09:57.507 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8225], 00:09:57.507 | 99.00th=[10683], 99.50th=[11469], 99.90th=[12649], 99.95th=[13173], 00:09:57.507 | 99.99th=[13435] 00:09:57.507 bw ( KiB/s): min=10272, max=28480, per=90.95%, avg=22959.27, stdev=5537.38, samples=11 00:09:57.507 iops : min= 2568, max= 7120, avg=5739.82, stdev=1384.34, samples=11 00:09:57.507 lat (msec) : 2=0.05%, 4=2.16%, 10=92.15%, 20=5.64% 00:09:57.507 cpu : usr=5.50%, sys=22.66%, ctx=5698, majf=0, minf=90 00:09:57.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:57.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.507 issued rwts: total=66043,34516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.507 00:09:57.507 Run status group 0 (all jobs): 00:09:57.507 READ: bw=43.0MiB/s (45.0MB/s), 43.0MiB/s-43.0MiB/s (45.0MB/s-45.0MB/s), io=258MiB (271MB), run=6006-6006msec 00:09:57.507 WRITE: bw=24.7MiB/s (25.8MB/s), 24.7MiB/s-24.7MiB/s (25.8MB/s-25.8MB/s), io=135MiB (141MB), run=5469-5469msec 00:09:57.507 00:09:57.507 Disk stats (read/write): 00:09:57.507 nvme0n1: ios=65107/33881, merge=0/0, 
ticks=490544/218991, in_queue=709535, util=98.60% 00:09:57.507 21:17:20 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:57.507 21:17:20 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:57.507 21:17:20 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:57.508 21:17:20 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:57.508 21:17:20 -- target/multipath.sh@22 -- # local timeout=20 00:09:57.508 21:17:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:57.508 21:17:20 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:57.508 21:17:20 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:57.508 21:17:20 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:57.508 21:17:20 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:57.508 21:17:20 -- target/multipath.sh@22 -- # local timeout=20 00:09:57.508 21:17:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:57.508 21:17:20 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:57.508 21:17:20 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:57.508 21:17:20 -- target/multipath.sh@113 -- # echo round-robin 00:09:57.508 21:17:20 -- target/multipath.sh@116 -- # fio_pid=74046 00:09:57.508 21:17:20 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:57.508 21:17:20 -- target/multipath.sh@118 -- # sleep 1 00:09:57.508 [global] 00:09:57.508 thread=1 00:09:57.508 invalidate=1 00:09:57.508 rw=randrw 00:09:57.508 time_based=1 00:09:57.508 runtime=6 00:09:57.508 ioengine=libaio 00:09:57.508 direct=1 00:09:57.508 bs=4096 00:09:57.508 iodepth=128 00:09:57.508 norandommap=0 00:09:57.508 numjobs=1 00:09:57.508 00:09:57.508 verify_dump=1 00:09:57.508 verify_backlog=512 00:09:57.508 verify_state_save=0 00:09:57.508 do_verify=1 00:09:57.508 verify=crc32c-intel 00:09:57.508 [job0] 00:09:57.508 filename=/dev/nvme0n1 00:09:57.508 Could not set queue depth (nvme0n1) 00:09:57.508 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:57.508 fio-3.35 00:09:57.508 Starting 1 thread 00:09:58.079 21:17:21 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:58.337 21:17:22 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:58.596 21:17:22 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:58.596 21:17:22 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:58.596 21:17:22 -- target/multipath.sh@22 -- # local timeout=20 00:09:58.596 21:17:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:58.596 21:17:22 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:58.596 21:17:22 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:58.596 21:17:22 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:58.596 21:17:22 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:58.596 21:17:22 -- target/multipath.sh@22 -- # local timeout=20 00:09:58.596 21:17:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:58.596 21:17:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:58.596 21:17:22 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:58.596 21:17:22 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:58.854 21:17:22 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:59.113 21:17:22 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:59.113 21:17:22 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:59.113 21:17:22 -- target/multipath.sh@22 -- # local timeout=20 00:09:59.113 21:17:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:59.113 21:17:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:59.113 21:17:22 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:59.113 21:17:22 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:59.113 21:17:22 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:59.113 21:17:22 -- target/multipath.sh@22 -- # local timeout=20 00:09:59.113 21:17:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:59.113 21:17:22 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:59.113 21:17:22 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:59.113 21:17:22 -- target/multipath.sh@132 -- # wait 74046 00:10:03.301 00:10:03.301 job0: (groupid=0, jobs=1): err= 0: pid=74067: Thu Nov 28 21:17:27 2024 00:10:03.301 read: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(278MiB/6002msec) 00:10:03.301 slat (usec): min=4, max=8030, avg=41.95, stdev=194.80 00:10:03.301 clat (usec): min=336, max=16345, avg=7372.50, stdev=1842.42 00:10:03.301 lat (usec): min=401, max=16360, avg=7414.46, stdev=1855.80 00:10:03.301 clat percentiles (usec): 00:10:03.301 | 1.00th=[ 3195], 5.00th=[ 4113], 10.00th=[ 4817], 20.00th=[ 5866], 00:10:03.301 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7832], 00:10:03.301 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[ 9110], 95.00th=[10683], 00:10:03.301 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13566], 99.95th=[14222], 00:10:03.301 | 99.99th=[15795] 00:10:03.301 bw ( KiB/s): min= 9488, max=39968, per=53.07%, avg=25130.18, stdev=8835.99, samples=11 00:10:03.301 iops : min= 2372, max= 9992, avg=6282.55, stdev=2209.00, samples=11 00:10:03.301 write: IOPS=6863, BW=26.8MiB/s (28.1MB/s)(146MiB/5450msec); 0 zone resets 00:10:03.301 slat (usec): min=15, max=2923, avg=55.52, stdev=129.61 00:10:03.301 clat (usec): min=1739, max=14013, avg=6358.68, stdev=1719.39 00:10:03.301 lat (usec): min=1775, max=14039, avg=6414.20, stdev=1733.37 00:10:03.301 clat percentiles (usec): 00:10:03.301 | 1.00th=[ 2704], 5.00th=[ 3326], 10.00th=[ 3720], 20.00th=[ 4490], 00:10:03.301 | 30.00th=[ 5407], 40.00th=[ 6521], 50.00th=[ 6915], 60.00th=[ 7177], 00:10:03.301 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8094], 95.00th=[ 8455], 00:10:03.302 | 99.00th=[10290], 99.50th=[11076], 99.90th=[12387], 99.95th=[12518], 00:10:03.302 | 99.99th=[13435] 00:10:03.302 bw ( KiB/s): min= 9784, max=39265, per=91.45%, avg=25107.73, stdev=8635.40, samples=11 00:10:03.302 iops : min= 2446, max= 9816, avg=6276.91, stdev=2158.81, samples=11 00:10:03.302 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:03.302 lat (msec) : 2=0.10%, 4=7.42%, 10=87.93%, 20=4.53% 00:10:03.302 cpu : usr=5.93%, sys=25.25%, ctx=5926, majf=0, minf=90 00:10:03.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:03.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.302 issued rwts: total=71046,37407,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.302 00:10:03.302 Run status group 0 (all jobs): 00:10:03.302 READ: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=278MiB (291MB), run=6002-6002msec 00:10:03.302 WRITE: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=146MiB (153MB), run=5450-5450msec 00:10:03.302 00:10:03.302 Disk stats (read/write): 00:10:03.302 nvme0n1: ios=70076/36995, merge=0/0, ticks=487348/217532, in_queue=704880, util=98.63% 00:10:03.302 21:17:27 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:03.561 21:17:27 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.561 21:17:27 -- common/autotest_common.sh@1208 -- # local i=0 00:10:03.561 21:17:27 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:03.561 21:17:27 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.561 21:17:27 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:03.561 21:17:27 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.561 21:17:27 -- common/autotest_common.sh@1220 -- # return 0 00:10:03.561 21:17:27 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.820 21:17:27 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:03.820 21:17:27 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:03.820 21:17:27 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:03.820 21:17:27 -- target/multipath.sh@144 -- # nvmftestfini 00:10:03.820 21:17:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:03.820 21:17:27 -- nvmf/common.sh@116 -- # sync 00:10:03.820 21:17:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:03.820 21:17:27 -- nvmf/common.sh@119 -- # set +e 00:10:03.820 21:17:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:03.820 21:17:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:03.820 rmmod nvme_tcp 00:10:03.820 rmmod nvme_fabrics 00:10:03.820 rmmod nvme_keyring 00:10:03.820 21:17:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:03.820 21:17:27 -- nvmf/common.sh@123 -- # set -e 00:10:03.820 21:17:27 -- nvmf/common.sh@124 -- # return 0 00:10:03.820 21:17:27 -- nvmf/common.sh@477 -- # '[' -n 73848 ']' 00:10:03.820 21:17:27 -- nvmf/common.sh@478 -- # killprocess 73848 00:10:03.820 21:17:27 -- common/autotest_common.sh@936 -- # '[' -z 73848 ']' 00:10:03.820 21:17:27 -- common/autotest_common.sh@940 -- # kill -0 73848 00:10:03.820 21:17:27 -- common/autotest_common.sh@941 -- # uname 00:10:03.820 21:17:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:03.820 21:17:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73848 00:10:03.820 killing process with pid 73848 00:10:03.820 21:17:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:03.820 21:17:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:03.820 21:17:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73848' 00:10:03.820 21:17:27 -- common/autotest_common.sh@955 -- # kill 73848 00:10:03.821 21:17:27 -- common/autotest_common.sh@960 -- # wait 73848 00:10:04.080 21:17:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:04.080 21:17:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:04.080 21:17:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:04.080 21:17:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:04.080 21:17:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:04.080 21:17:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.080 21:17:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.080 21:17:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.080 21:17:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:04.080 ************************************ 00:10:04.080 END TEST nvmf_multipath 00:10:04.080 ************************************ 00:10:04.080 00:10:04.080 real 0m19.442s 00:10:04.080 user 1m12.795s 00:10:04.080 sys 0m10.149s 00:10:04.080 21:17:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:04.080 21:17:27 -- common/autotest_common.sh@10 -- # set +x 00:10:04.080 21:17:27 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:04.080 21:17:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:04.080 21:17:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:04.080 21:17:27 -- common/autotest_common.sh@10 -- # set +x 00:10:04.080 ************************************ 00:10:04.080 START TEST nvmf_zcopy 00:10:04.080 ************************************ 00:10:04.080 21:17:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:04.080 * Looking for test storage... 00:10:04.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.080 21:17:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:04.354 21:17:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:04.354 21:17:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:04.354 21:17:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:04.354 21:17:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:04.354 21:17:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:04.354 21:17:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:04.354 21:17:27 -- scripts/common.sh@335 -- # IFS=.-: 00:10:04.354 21:17:27 -- scripts/common.sh@335 -- # read -ra ver1 00:10:04.354 21:17:27 -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.354 21:17:27 -- scripts/common.sh@336 -- # read -ra ver2 00:10:04.354 21:17:27 -- scripts/common.sh@337 -- # local 'op=<' 00:10:04.354 21:17:27 -- scripts/common.sh@339 -- # ver1_l=2 00:10:04.354 21:17:27 -- scripts/common.sh@340 -- # ver2_l=1 00:10:04.354 21:17:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:04.354 21:17:27 -- scripts/common.sh@343 -- # case "$op" in 00:10:04.354 21:17:27 -- scripts/common.sh@344 -- # : 1 00:10:04.354 21:17:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:04.354 21:17:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.354 21:17:27 -- scripts/common.sh@364 -- # decimal 1 00:10:04.354 21:17:27 -- scripts/common.sh@352 -- # local d=1 00:10:04.354 21:17:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.354 21:17:27 -- scripts/common.sh@354 -- # echo 1 00:10:04.354 21:17:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:04.354 21:17:27 -- scripts/common.sh@365 -- # decimal 2 00:10:04.354 21:17:27 -- scripts/common.sh@352 -- # local d=2 00:10:04.354 21:17:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.354 21:17:27 -- scripts/common.sh@354 -- # echo 2 00:10:04.354 21:17:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:04.354 21:17:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:04.354 21:17:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:04.354 21:17:27 -- scripts/common.sh@367 -- # return 0 00:10:04.354 21:17:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.354 21:17:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:04.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.354 --rc genhtml_branch_coverage=1 00:10:04.354 --rc genhtml_function_coverage=1 00:10:04.354 --rc genhtml_legend=1 00:10:04.354 --rc geninfo_all_blocks=1 00:10:04.354 --rc geninfo_unexecuted_blocks=1 00:10:04.354 00:10:04.354 ' 00:10:04.354 21:17:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:04.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.354 --rc genhtml_branch_coverage=1 00:10:04.354 --rc genhtml_function_coverage=1 00:10:04.354 --rc genhtml_legend=1 00:10:04.354 --rc geninfo_all_blocks=1 00:10:04.354 --rc geninfo_unexecuted_blocks=1 00:10:04.354 00:10:04.354 ' 00:10:04.354 21:17:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:04.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.354 --rc genhtml_branch_coverage=1 00:10:04.354 --rc genhtml_function_coverage=1 00:10:04.354 --rc genhtml_legend=1 00:10:04.354 --rc geninfo_all_blocks=1 00:10:04.354 --rc geninfo_unexecuted_blocks=1 00:10:04.354 00:10:04.354 ' 00:10:04.354 21:17:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:04.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.354 --rc genhtml_branch_coverage=1 00:10:04.354 --rc genhtml_function_coverage=1 00:10:04.354 --rc genhtml_legend=1 00:10:04.354 --rc geninfo_all_blocks=1 00:10:04.354 --rc geninfo_unexecuted_blocks=1 00:10:04.354 00:10:04.354 ' 00:10:04.354 21:17:27 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.354 21:17:27 -- nvmf/common.sh@7 -- # uname -s 00:10:04.354 21:17:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.354 21:17:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.354 21:17:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.354 21:17:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.354 21:17:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.354 21:17:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.354 21:17:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.354 21:17:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.354 21:17:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.354 21:17:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.354 21:17:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:10:04.354 
21:17:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:10:04.354 21:17:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.354 21:17:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.354 21:17:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:04.354 21:17:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.354 21:17:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.354 21:17:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.354 21:17:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.354 21:17:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.354 21:17:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.354 21:17:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.354 21:17:27 -- paths/export.sh@5 -- # export PATH 00:10:04.354 21:17:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.354 21:17:27 -- nvmf/common.sh@46 -- # : 0 00:10:04.354 21:17:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:04.354 21:17:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:04.354 21:17:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:04.354 21:17:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.354 21:17:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.354 21:17:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
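[editor's sketch] The entries that follow (nvmf_veth_init) tear down any stale interfaces and then build the veth/bridge/netns topology used by this test. The condensed bash sketch below is not the test script itself; it only collects the ip/iptables commands visible in the log (interface names nvmf_init_if, nvmf_tgt_if, nvmf_tgt_if2, bridge nvmf_br, namespace nvmf_tgt_ns_spdk, addresses 10.0.0.1-3) into one runnable sequence. Anything not shown in the log, such as error handling for pre-existing devices, is omitted and assumed to be handled by the real script. Running it requires root.

#!/usr/bin/env bash
# Condensed sketch of the topology nvmf_veth_init builds in the entries below.
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# One veth pair for the initiator side, two for the target side; the target
# ends are moved into the namespace that will host nvmf_tgt.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addresses used by the test: 10.0.0.1 (initiator), 10.0.0.2/10.0.0.3 (target).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peer interfaces together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic on port 4420 and forwarding across the bridge,
# then verify reachability the same way the log does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1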
00:10:04.354 21:17:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:04.354 21:17:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:04.354 21:17:27 -- target/zcopy.sh@12 -- # nvmftestinit 00:10:04.354 21:17:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:04.354 21:17:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.354 21:17:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:04.354 21:17:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:04.354 21:17:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:04.354 21:17:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.354 21:17:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.354 21:17:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.354 21:17:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:04.354 21:17:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:04.354 21:17:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:04.354 21:17:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:04.354 21:17:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:04.354 21:17:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:04.354 21:17:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.354 21:17:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.354 21:17:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:04.354 21:17:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:04.354 21:17:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:04.354 21:17:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:04.354 21:17:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:04.354 21:17:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.355 21:17:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:04.355 21:17:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:04.355 21:17:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:04.355 21:17:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:04.355 21:17:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:04.355 21:17:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:04.355 Cannot find device "nvmf_tgt_br" 00:10:04.355 21:17:27 -- nvmf/common.sh@154 -- # true 00:10:04.355 21:17:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:04.355 Cannot find device "nvmf_tgt_br2" 00:10:04.355 21:17:27 -- nvmf/common.sh@155 -- # true 00:10:04.355 21:17:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:04.355 21:17:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:04.355 Cannot find device "nvmf_tgt_br" 00:10:04.355 21:17:28 -- nvmf/common.sh@157 -- # true 00:10:04.355 21:17:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:04.355 Cannot find device "nvmf_tgt_br2" 00:10:04.355 21:17:28 -- nvmf/common.sh@158 -- # true 00:10:04.355 21:17:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:04.355 21:17:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:04.669 21:17:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:04.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.669 21:17:28 -- nvmf/common.sh@161 -- # true 00:10:04.669 21:17:28 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:04.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.669 21:17:28 -- nvmf/common.sh@162 -- # true 00:10:04.669 21:17:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:04.669 21:17:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:04.669 21:17:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:04.669 21:17:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:04.669 21:17:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:04.669 21:17:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:04.669 21:17:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:04.669 21:17:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:04.669 21:17:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:04.669 21:17:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:04.669 21:17:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:04.669 21:17:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:04.669 21:17:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:04.669 21:17:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:04.669 21:17:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:04.669 21:17:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:04.669 21:17:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:04.669 21:17:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:04.669 21:17:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:04.669 21:17:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:04.669 21:17:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:04.669 21:17:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:04.669 21:17:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:04.669 21:17:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:04.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:04.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:10:04.669 00:10:04.669 --- 10.0.0.2 ping statistics --- 00:10:04.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.669 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:04.669 21:17:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:04.669 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:04.669 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:10:04.669 00:10:04.669 --- 10.0.0.3 ping statistics --- 00:10:04.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.669 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:10:04.669 21:17:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:04.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:04.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:10:04.669 00:10:04.670 --- 10.0.0.1 ping statistics --- 00:10:04.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.670 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:10:04.670 21:17:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.670 21:17:28 -- nvmf/common.sh@421 -- # return 0 00:10:04.670 21:17:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:04.670 21:17:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.670 21:17:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:04.670 21:17:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:04.670 21:17:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.670 21:17:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:04.670 21:17:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:04.670 21:17:28 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:04.670 21:17:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:04.670 21:17:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:04.670 21:17:28 -- common/autotest_common.sh@10 -- # set +x 00:10:04.670 21:17:28 -- nvmf/common.sh@469 -- # nvmfpid=74323 00:10:04.670 21:17:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:04.670 21:17:28 -- nvmf/common.sh@470 -- # waitforlisten 74323 00:10:04.670 21:17:28 -- common/autotest_common.sh@829 -- # '[' -z 74323 ']' 00:10:04.670 21:17:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.670 21:17:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:04.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.670 21:17:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.670 21:17:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:04.670 21:17:28 -- common/autotest_common.sh@10 -- # set +x 00:10:04.670 [2024-11-28 21:17:28.371192] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:04.670 [2024-11-28 21:17:28.371325] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.930 [2024-11-28 21:17:28.512618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.930 [2024-11-28 21:17:28.545930] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:04.930 [2024-11-28 21:17:28.546149] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.930 [2024-11-28 21:17:28.546163] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.930 [2024-11-28 21:17:28.546171] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
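[editor's sketch] At this point the target application is running inside the namespace and zcopy.sh issues its bring-up RPCs (create the TCP transport with zero-copy enabled, create subsystem cnode1, add the 10.0.0.2:4420 listener, create malloc0 and attach it as namespace 1) before driving I/O with bdevperf. The sketch below only restates the commands that appear in the subsequent entries, using the repo paths from this log; flag semantics are not expanded here, and the RPC socket path /var/tmp/spdk.sock is the default the log waits on.

# Condensed sketch of the zcopy target bring-up performed below via rpc_cmd.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Start the target inside the namespace (as nvmfappstart does in the log),
# then wait for it to listen on /var/tmp/spdk.sock before sending RPCs.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy    # flags exactly as used by zcopy.sh
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0           # malloc bdev backing namespace 1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# I/O is then generated from the host side with bdevperf; the JSON config
# (gen_nvmf_target_json in the log) attaches Nvme1 over TCP to 10.0.0.2:4420.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192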
00:10:04.930 [2024-11-28 21:17:28.546196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.930 21:17:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:04.930 21:17:28 -- common/autotest_common.sh@862 -- # return 0 00:10:04.930 21:17:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:04.930 21:17:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:04.930 21:17:28 -- common/autotest_common.sh@10 -- # set +x 00:10:04.930 21:17:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.930 21:17:28 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:04.930 21:17:28 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:04.930 21:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.930 21:17:28 -- common/autotest_common.sh@10 -- # set +x 00:10:04.930 [2024-11-28 21:17:28.667983] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.930 21:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.189 21:17:28 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:05.189 21:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.189 21:17:28 -- common/autotest_common.sh@10 -- # set +x 00:10:05.189 21:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.189 21:17:28 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.189 21:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.189 21:17:28 -- common/autotest_common.sh@10 -- # set +x 00:10:05.189 [2024-11-28 21:17:28.684114] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.189 21:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.189 21:17:28 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:05.189 21:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.189 21:17:28 -- common/autotest_common.sh@10 -- # set +x 00:10:05.189 21:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.189 21:17:28 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:05.189 21:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.189 21:17:28 -- common/autotest_common.sh@10 -- # set +x 00:10:05.189 malloc0 00:10:05.189 21:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.189 21:17:28 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:05.189 21:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.189 21:17:28 -- common/autotest_common.sh@10 -- # set +x 00:10:05.189 21:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.189 21:17:28 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:05.189 21:17:28 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:05.189 21:17:28 -- nvmf/common.sh@520 -- # config=() 00:10:05.189 21:17:28 -- nvmf/common.sh@520 -- # local subsystem config 00:10:05.189 21:17:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:05.189 21:17:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:05.189 { 00:10:05.189 "params": { 00:10:05.189 "name": "Nvme$subsystem", 00:10:05.189 "trtype": "$TEST_TRANSPORT", 
00:10:05.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.189 "adrfam": "ipv4", 00:10:05.189 "trsvcid": "$NVMF_PORT", 00:10:05.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.189 "hdgst": ${hdgst:-false}, 00:10:05.189 "ddgst": ${ddgst:-false} 00:10:05.189 }, 00:10:05.189 "method": "bdev_nvme_attach_controller" 00:10:05.189 } 00:10:05.189 EOF 00:10:05.189 )") 00:10:05.189 21:17:28 -- nvmf/common.sh@542 -- # cat 00:10:05.189 21:17:28 -- nvmf/common.sh@544 -- # jq . 00:10:05.189 21:17:28 -- nvmf/common.sh@545 -- # IFS=, 00:10:05.189 21:17:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:05.189 "params": { 00:10:05.189 "name": "Nvme1", 00:10:05.189 "trtype": "tcp", 00:10:05.189 "traddr": "10.0.0.2", 00:10:05.189 "adrfam": "ipv4", 00:10:05.189 "trsvcid": "4420", 00:10:05.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.189 "hdgst": false, 00:10:05.189 "ddgst": false 00:10:05.189 }, 00:10:05.189 "method": "bdev_nvme_attach_controller" 00:10:05.189 }' 00:10:05.189 [2024-11-28 21:17:28.777137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:05.189 [2024-11-28 21:17:28.777247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74353 ] 00:10:05.189 [2024-11-28 21:17:28.917394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.448 [2024-11-28 21:17:28.950913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.448 Running I/O for 10 seconds... 00:10:15.425 00:10:15.425 Latency(us) 00:10:15.425 [2024-11-28T21:17:39.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.425 [2024-11-28T21:17:39.168Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:15.425 Verification LBA range: start 0x0 length 0x1000 00:10:15.425 Nvme1n1 : 10.01 9907.47 77.40 0.00 0.00 12885.72 1124.54 21209.83 00:10:15.425 [2024-11-28T21:17:39.168Z] =================================================================================================================== 00:10:15.425 [2024-11-28T21:17:39.168Z] Total : 9907.47 77.40 0.00 0.00 12885.72 1124.54 21209.83 00:10:15.686 21:17:39 -- target/zcopy.sh@39 -- # perfpid=74466 00:10:15.686 21:17:39 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:15.686 21:17:39 -- target/zcopy.sh@41 -- # xtrace_disable 00:10:15.686 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:10:15.686 21:17:39 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:15.686 21:17:39 -- nvmf/common.sh@520 -- # config=() 00:10:15.686 21:17:39 -- nvmf/common.sh@520 -- # local subsystem config 00:10:15.686 21:17:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:15.686 21:17:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:15.686 { 00:10:15.686 "params": { 00:10:15.686 "name": "Nvme$subsystem", 00:10:15.686 "trtype": "$TEST_TRANSPORT", 00:10:15.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.686 "adrfam": "ipv4", 00:10:15.686 "trsvcid": "$NVMF_PORT", 00:10:15.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.686 "hdgst": ${hdgst:-false}, 00:10:15.686 "ddgst": ${ddgst:-false} 
00:10:15.686 }, 00:10:15.686 "method": "bdev_nvme_attach_controller" 00:10:15.686 } 00:10:15.686 EOF 00:10:15.686 )") 00:10:15.686 [2024-11-28 21:17:39.242701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.242743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.686 21:17:39 -- nvmf/common.sh@542 -- # cat 00:10:15.686 21:17:39 -- nvmf/common.sh@544 -- # jq . 00:10:15.686 21:17:39 -- nvmf/common.sh@545 -- # IFS=, 00:10:15.686 21:17:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:15.686 "params": { 00:10:15.686 "name": "Nvme1", 00:10:15.686 "trtype": "tcp", 00:10:15.686 "traddr": "10.0.0.2", 00:10:15.686 "adrfam": "ipv4", 00:10:15.686 "trsvcid": "4420", 00:10:15.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.686 "hdgst": false, 00:10:15.686 "ddgst": false 00:10:15.686 }, 00:10:15.686 "method": "bdev_nvme_attach_controller" 00:10:15.686 }' 00:10:15.686 [2024-11-28 21:17:39.254654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.254682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.686 [2024-11-28 21:17:39.266655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.266680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.686 [2024-11-28 21:17:39.274218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:15.686 [2024-11-28 21:17:39.274287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74466 ] 00:10:15.686 [2024-11-28 21:17:39.278661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.278686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.686 [2024-11-28 21:17:39.290663] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.290687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.686 [2024-11-28 21:17:39.302667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.302690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.686 [2024-11-28 21:17:39.314669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.314694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.686 [2024-11-28 21:17:39.326674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.326699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.686 [2024-11-28 21:17:39.338678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.338701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.686 [2024-11-28 21:17:39.350680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.350704] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.686 [2024-11-28 21:17:39.362694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.362717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.686 [2024-11-28 21:17:39.374693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.374715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.686 [2024-11-28 21:17:39.386694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.386716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.686 [2024-11-28 21:17:39.398701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.398724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.686 [2024-11-28 21:17:39.409745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.686 [2024-11-28 21:17:39.410707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.410730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.686 [2024-11-28 21:17:39.422733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.686 [2024-11-28 21:17:39.422766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.434739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.434776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.444930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.946 [2024-11-28 21:17:39.446724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.446748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.458741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.458773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.470751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.470788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.482746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.482780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.494754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.494791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.506754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.506786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.518761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:10:15.946 [2024-11-28 21:17:39.518789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.530770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.530800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.542779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.542810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.554802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.554830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.566803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.566833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 Running I/O for 5 seconds... 00:10:15.946 [2024-11-28 21:17:39.585262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.585295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.599636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.599668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.615570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.615601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.634973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.635018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.648584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.648615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.663484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.663515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.946 [2024-11-28 21:17:39.673008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.946 [2024-11-28 21:17:39.673064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.690055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.690087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.705837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.705866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.724367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.724400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:10:16.206 [2024-11-28 21:17:39.738710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.738740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.754117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.754147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.773173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.773202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.787238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.787269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.802222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.802252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.813800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.813829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.831333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.831362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.845952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.845984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.855689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.855721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.872863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.872895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.889536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.889566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.906416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.906461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.923496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.923526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.206 [2024-11-28 21:17:39.939846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.206 [2024-11-28 21:17:39.939878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:39.956723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:16.465 [2024-11-28 21:17:39.956755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:39.973512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.465 [2024-11-28 21:17:39.973544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:39.989193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.465 [2024-11-28 21:17:39.989222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:40.005969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.465 [2024-11-28 21:17:40.006013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:40.023471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.465 [2024-11-28 21:17:40.023507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:40.038616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.465 [2024-11-28 21:17:40.038655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:40.050258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.465 [2024-11-28 21:17:40.050291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:40.066816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.465 [2024-11-28 21:17:40.066846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:40.083687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.465 [2024-11-28 21:17:40.083719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:40.099961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.465 [2024-11-28 21:17:40.099991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:40.116466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.465 [2024-11-28 21:17:40.116497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:40.133685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.465 [2024-11-28 21:17:40.133716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:40.149590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.465 [2024-11-28 21:17:40.149622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:40.167047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.465 [2024-11-28 21:17:40.167090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:40.181593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.465 [2024-11-28 21:17:40.181623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.465 [2024-11-28 21:17:40.196755] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.465 [2024-11-28 21:17:40.196816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.212897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.212959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.229421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.229491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.247447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.247503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.261104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.261160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.276950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.277022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.295856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.295906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.310062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.310115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.325753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.325804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.345011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.345070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.359181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.359244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.376649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.376701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.391676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.391748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.401692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.401727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.417095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.417146] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.433540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.433580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.451509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.451554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.724 [2024-11-28 21:17:40.466048] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.724 [2024-11-28 21:17:40.466110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.475896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.475925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.490845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.490878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.508452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.508482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.523530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.523560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.533679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.533706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.548296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.548336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.565651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.565709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.581233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.581286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.599152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.599197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.614205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.614251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.630215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.630263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.648964] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.649034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.663103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.663151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.680310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.680356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.694045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.694083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.984 [2024-11-28 21:17:40.709187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.984 [2024-11-28 21:17:40.709236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.728033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.728105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.742173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.742226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.757657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.757722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.775135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.775199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.790689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.790744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.808910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.808941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.824772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.824816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.841751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.841793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.858721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.858765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.875125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.875176] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.891210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.891269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.909340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.909384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.924473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.924517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.934462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.934519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.950754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.950781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.968232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.968291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.243 [2024-11-28 21:17:40.983979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.243 [2024-11-28 21:17:40.984037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.002697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.002756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.017689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.017716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.027450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.027481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.042437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.042480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.054480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.054524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.070697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.070740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.087329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.087371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.103767] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.103825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.120423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.120452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.136711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.136756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.155584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.155614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.169189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.169218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.185723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.185765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.200816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.200859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.211526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.211572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.227003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.227068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.503 [2024-11-28 21:17:41.244047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.503 [2024-11-28 21:17:41.244098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-11-28 21:17:41.259459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-11-28 21:17:41.259491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-11-28 21:17:41.269289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-11-28 21:17:41.269331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-11-28 21:17:41.284734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-11-28 21:17:41.284789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-11-28 21:17:41.302375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-11-28 21:17:41.302433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-11-28 21:17:41.318851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-11-28 21:17:41.318896] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-11-28 21:17:41.335113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-11-28 21:17:41.335140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-11-28 21:17:41.352400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-11-28 21:17:41.352442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-11-28 21:17:41.368571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-11-28 21:17:41.368612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-11-28 21:17:41.385316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-11-28 21:17:41.385358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-11-28 21:17:41.402626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-11-28 21:17:41.402669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-11-28 21:17:41.417778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-11-28 21:17:41.417820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-11-28 21:17:41.428853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-11-28 21:17:41.428896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-11-28 21:17:41.444765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-11-28 21:17:41.444807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-11-28 21:17:41.462706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-11-28 21:17:41.462749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.762 [2024-11-28 21:17:41.477601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.762 [2024-11-28 21:17:41.477644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.763 [2024-11-28 21:17:41.489266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.763 [2024-11-28 21:17:41.489309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.505760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.505806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.519849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.519893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.535770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.535813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.552388] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.552431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.569780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.569822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.585151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.585193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.602166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.602209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.619062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.619104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.637006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.637042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.653238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.653280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.670670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.670713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.685575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.685619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.700826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.700870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.718577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.718621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.734507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.734564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.022 [2024-11-28 21:17:41.751973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.022 [2024-11-28 21:17:41.752030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.281 [2024-11-28 21:17:41.767768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.281 [2024-11-28 21:17:41.767825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.281 [2024-11-28 21:17:41.784529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.281 [2024-11-28 21:17:41.784572] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.281 [2024-11-28 21:17:41.800780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.281 [2024-11-28 21:17:41.800823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.281 [2024-11-28 21:17:41.818112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.281 [2024-11-28 21:17:41.818156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.281 [2024-11-28 21:17:41.833528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.281 [2024-11-28 21:17:41.833570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.281 [2024-11-28 21:17:41.852375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.281 [2024-11-28 21:17:41.852419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.281 [2024-11-28 21:17:41.865745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.281 [2024-11-28 21:17:41.865787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.281 [2024-11-28 21:17:41.880817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.281 [2024-11-28 21:17:41.880861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.281 [2024-11-28 21:17:41.892396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.281 [2024-11-28 21:17:41.892439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.281 [2024-11-28 21:17:41.907794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.281 [2024-11-28 21:17:41.907836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.281 [2024-11-28 21:17:41.925673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.281 [2024-11-28 21:17:41.925715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.281 [2024-11-28 21:17:41.941717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.281 [2024-11-28 21:17:41.941759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.281 [2024-11-28 21:17:41.957609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.281 [2024-11-28 21:17:41.957651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.281 [2024-11-28 21:17:41.968740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.281 [2024-11-28 21:17:41.968782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.281 [2024-11-28 21:17:41.985250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.281 [2024-11-28 21:17:41.985295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.282 [2024-11-28 21:17:41.999864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.282 [2024-11-28 21:17:41.999906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.282 [2024-11-28 21:17:42.015285] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.282 [2024-11-28 21:17:42.015327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.541 [2024-11-28 21:17:42.033749] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.541 [2024-11-28 21:17:42.033792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.541 [2024-11-28 21:17:42.049688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.541 [2024-11-28 21:17:42.049731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.541 [2024-11-28 21:17:42.067340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.541 [2024-11-28 21:17:42.067390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.541 [2024-11-28 21:17:42.081557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.541 [2024-11-28 21:17:42.081599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.541 [2024-11-28 21:17:42.098196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.541 [2024-11-28 21:17:42.098254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.541 [2024-11-28 21:17:42.113278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.541 [2024-11-28 21:17:42.113322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.541 [2024-11-28 21:17:42.124306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.541 [2024-11-28 21:17:42.124349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.541 [2024-11-28 21:17:42.140229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.541 [2024-11-28 21:17:42.140270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.542 [2024-11-28 21:17:42.156838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.542 [2024-11-28 21:17:42.156880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.542 [2024-11-28 21:17:42.173855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.542 [2024-11-28 21:17:42.173883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.542 [2024-11-28 21:17:42.191233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.542 [2024-11-28 21:17:42.191275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.542 [2024-11-28 21:17:42.207317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.542 [2024-11-28 21:17:42.207361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.542 [2024-11-28 21:17:42.225690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.542 [2024-11-28 21:17:42.225733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.542 [2024-11-28 21:17:42.239456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.542 [2024-11-28 21:17:42.239485] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.542 [2024-11-28 21:17:42.254698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.542 [2024-11-28 21:17:42.254740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.542 [2024-11-28 21:17:42.272771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.542 [2024-11-28 21:17:42.272815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.288376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.288442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.305555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.305597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.320442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.320484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.336450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.336495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.353593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.353658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.368867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.368912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.380202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.380256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.396445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.396529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.413023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.413079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.430068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.430110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.446358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.446403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.464914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.464959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.479748] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.479791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.489042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.489094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.504793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.504834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.515577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.515620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.802 [2024-11-28 21:17:42.532519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.802 [2024-11-28 21:17:42.532561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.547771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.547816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.565199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.565242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.581526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.581569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.598374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.598415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.614986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.615054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.631848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.631890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.649774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.649840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.664668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.664732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.676646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.676694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.692714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.692757] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.708662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.708704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.724662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.724706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.742366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.742409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.758374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.758401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.776638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.776665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.062 [2024-11-28 21:17:42.790934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.062 [2024-11-28 21:17:42.790979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:42.808388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:42.808418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:42.823129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:42.823171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:42.834205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:42.834247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:42.850025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:42.850082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:42.867355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:42.867420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:42.884196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:42.884238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:42.901273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:42.901315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:42.918324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:42.918365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:42.935567] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:42.935611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:42.951409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:42.951436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:42.968923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:42.968966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:42.984706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:42.984747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:43.002925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:43.002967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:43.018504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:43.018546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:43.034024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:43.034095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:43.045324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:43.045368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.322 [2024-11-28 21:17:43.061388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.322 [2024-11-28 21:17:43.061447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.077957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.077999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.096263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.096305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.110875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.110918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.127047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.127088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.145039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.145091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.159441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.159484] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.174204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.174261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.190526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.190569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.205715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.205758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.220856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.220897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.238651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.238693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.253067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.253110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.269045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.269097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.285364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.285407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.302522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.302579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.582 [2024-11-28 21:17:43.319311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.582 [2024-11-28 21:17:43.319353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.333357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.333416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.349338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.349381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.366023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.366099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.382605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.382647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.400295] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.400337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.415572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.415615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.426698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.426724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.442664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.442706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.458305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.458332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.475969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.475998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.492958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.492985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.508501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.508544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.526160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.526215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.541758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.541800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.558691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.558732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.841 [2024-11-28 21:17:43.575136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.841 [2024-11-28 21:17:43.575163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.591643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.591697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.610752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.610796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.624843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.624887] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.641718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.641762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.658132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.658175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.675960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.676003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.691432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.691480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.708455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.708505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.724287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.724337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.735764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.735847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.752504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.752547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.767825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.767877] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.787638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.787671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.802895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.802927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.820002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.820056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.105 [2024-11-28 21:17:43.835804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.105 [2024-11-28 21:17:43.835836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:43.852208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:43.852240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:43.869905] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:43.869947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:43.884360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:43.884402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:43.900595] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:43.900637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:43.916414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:43.916455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:43.927948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:43.927988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:43.943907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:43.943947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:43.960364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:43.960435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:43.977149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:43.977232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:43.993608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:43.993666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:44.010494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:44.010549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:44.027825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:44.027854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:44.043201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:44.043230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:44.060879] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:44.060907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:44.076297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:44.076323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:44.093861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:44.093889] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.424 [2024-11-28 21:17:44.109414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.424 [2024-11-28 21:17:44.109442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.126701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.126729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.144247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.144274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.159921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.159948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.177528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.177556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.193474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.193502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.211549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.211580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.229008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.229046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.245051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.245089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.263077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.263104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.277539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.277569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.294591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.294618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.310677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.310707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.327619] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.327650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.344424] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.344452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.362004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.362074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.377711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.377738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.395810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.395837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.684 [2024-11-28 21:17:44.411324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.684 [2024-11-28 21:17:44.411353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.944 [2024-11-28 21:17:44.430611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.944 [2024-11-28 21:17:44.430641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.944 [2024-11-28 21:17:44.444849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.944 [2024-11-28 21:17:44.444876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.944 [2024-11-28 21:17:44.460711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.944 [2024-11-28 21:17:44.460738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.944 [2024-11-28 21:17:44.476806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.944 [2024-11-28 21:17:44.476837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.944 [2024-11-28 21:17:44.493170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.944 [2024-11-28 21:17:44.493197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.944 [2024-11-28 21:17:44.511273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.944 [2024-11-28 21:17:44.511300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.944 [2024-11-28 21:17:44.526431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.944 [2024-11-28 21:17:44.526474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.944 [2024-11-28 21:17:44.537730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.944 [2024-11-28 21:17:44.537758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.944 [2024-11-28 21:17:44.554113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.944 [2024-11-28 21:17:44.554140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.944 [2024-11-28 21:17:44.569533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.944 [2024-11-28 21:17:44.569578] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:20.944 [2024-11-28 21:17:44.579727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:20.944 [2024-11-28 21:17:44.579771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:20.944
00:10:20.944 Latency(us)
00:10:20.944 [2024-11-28T21:17:44.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:20.944 [2024-11-28T21:17:44.687Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:20.944 Nvme1n1 : 5.01 12748.37 99.60 0.00 0.00 10028.41 2383.13 17873.45
00:10:20.944 [2024-11-28T21:17:44.687Z] ===================================================================================================================
00:10:20.944 [2024-11-28T21:17:44.687Z] Total : 12748.37 99.60 0.00 0.00 10028.41 2383.13 17873.45
00:10:20.944 [2024-11-28 21:17:44.589902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:20.944 [2024-11-28 21:17:44.589946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:20.944 [2024-11-28 21:17:44.601945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:20.944 [2024-11-28 21:17:44.602004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:20.944 [2024-11-28 21:17:44.613955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:20.944 [2024-11-28 21:17:44.614007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:20.944 [2024-11-28 21:17:44.625952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:20.944 [2024-11-28 21:17:44.626004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:20.944 [2024-11-28 21:17:44.637968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:20.944 [2024-11-28 21:17:44.638056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:20.944 [2024-11-28 21:17:44.649970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:20.944 [2024-11-28 21:17:44.650049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:20.944 [2024-11-28 21:17:44.661960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:20.944 [2024-11-28 21:17:44.661993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:20.944 [2024-11-28 21:17:44.673963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:20.944 [2024-11-28 21:17:44.674005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:20.944 [2024-11-28 21:17:44.685989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:20.944 [2024-11-28 21:17:44.686033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.204 [2024-11-28 21:17:44.697983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.204 [2024-11-28 21:17:44.698021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.204 [2024-11-28 21:17:44.709991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.204 [2024-11-28 21:17:44.710051]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.204 [2024-11-28 21:17:44.721991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.204 [2024-11-28 21:17:44.722024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.204 [2024-11-28 21:17:44.733991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.204 [2024-11-28 21:17:44.734023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.204 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74466) - No such process 00:10:21.204 21:17:44 -- target/zcopy.sh@49 -- # wait 74466 00:10:21.204 21:17:44 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.204 21:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.204 21:17:44 -- common/autotest_common.sh@10 -- # set +x 00:10:21.204 21:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.204 21:17:44 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:21.204 21:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.204 21:17:44 -- common/autotest_common.sh@10 -- # set +x 00:10:21.204 delay0 00:10:21.204 21:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.204 21:17:44 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:21.204 21:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.204 21:17:44 -- common/autotest_common.sh@10 -- # set +x 00:10:21.204 21:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.204 21:17:44 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:21.204 [2024-11-28 21:17:44.934307] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:27.778 Initializing NVMe Controllers 00:10:27.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:27.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:27.778 Initialization complete. Launching workers. 
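The trace above removes the namespace that the duplicate-NSID loop was targeting, creates a delay bdev (delay0) on top of malloc0, re-attaches it to the subsystem as NSID 1, and then drives it with SPDK's bundled abort example over NVMe/TCP; the abort run's statistics continue below. A rough stand-alone sketch of the same sequence follows, with the values taken from the trace itself. It assumes, purely for illustration, a built SPDK tree at /home/vagrant/spdk_repo/spdk, the target reachable at 10.0.0.2:4420, and scripts/rpc.py pointed at the target's default RPC socket; in the log these calls go through the test harness's rpc_cmd wrapper rather than rpc.py directly.

# Re-create NSID 1 backed by a delay bdev (parameters as in the trace above).
cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Exercise the delayed namespace with the abort example for 5 seconds over TCP.
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'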
00:10:27.778 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 50 00:10:27.778 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 337, failed to submit 33 00:10:27.778 success 203, unsuccess 134, failed 0 00:10:27.778 21:17:50 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:27.778 21:17:50 -- target/zcopy.sh@60 -- # nvmftestfini 00:10:27.778 21:17:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:27.778 21:17:50 -- nvmf/common.sh@116 -- # sync 00:10:27.778 21:17:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:27.778 21:17:51 -- nvmf/common.sh@119 -- # set +e 00:10:27.778 21:17:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:27.778 21:17:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:27.778 rmmod nvme_tcp 00:10:27.778 rmmod nvme_fabrics 00:10:27.778 rmmod nvme_keyring 00:10:27.778 21:17:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:27.778 21:17:51 -- nvmf/common.sh@123 -- # set -e 00:10:27.778 21:17:51 -- nvmf/common.sh@124 -- # return 0 00:10:27.778 21:17:51 -- nvmf/common.sh@477 -- # '[' -n 74323 ']' 00:10:27.778 21:17:51 -- nvmf/common.sh@478 -- # killprocess 74323 00:10:27.778 21:17:51 -- common/autotest_common.sh@936 -- # '[' -z 74323 ']' 00:10:27.778 21:17:51 -- common/autotest_common.sh@940 -- # kill -0 74323 00:10:27.778 21:17:51 -- common/autotest_common.sh@941 -- # uname 00:10:27.778 21:17:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:27.778 21:17:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74323 00:10:27.778 21:17:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:27.778 21:17:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:27.778 killing process with pid 74323 00:10:27.778 21:17:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74323' 00:10:27.778 21:17:51 -- common/autotest_common.sh@955 -- # kill 74323 00:10:27.778 21:17:51 -- common/autotest_common.sh@960 -- # wait 74323 00:10:27.778 21:17:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:27.778 21:17:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:27.778 21:17:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:27.778 21:17:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:27.778 21:17:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:27.778 21:17:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.778 21:17:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.778 21:17:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.778 21:17:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:27.778 00:10:27.778 real 0m23.539s 00:10:27.778 user 0m38.913s 00:10:27.778 sys 0m6.466s 00:10:27.778 21:17:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:27.778 21:17:51 -- common/autotest_common.sh@10 -- # set +x 00:10:27.778 ************************************ 00:10:27.778 END TEST nvmf_zcopy 00:10:27.778 ************************************ 00:10:27.778 21:17:51 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:27.778 21:17:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:27.778 21:17:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:27.778 21:17:51 -- common/autotest_common.sh@10 -- # set +x 00:10:27.778 ************************************ 00:10:27.778 START TEST nvmf_nmic 
00:10:27.778 ************************************ 00:10:27.778 21:17:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:27.778 * Looking for test storage... 00:10:27.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:27.778 21:17:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:27.778 21:17:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:27.778 21:17:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:27.778 21:17:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:27.778 21:17:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:27.778 21:17:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:27.778 21:17:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:27.778 21:17:51 -- scripts/common.sh@335 -- # IFS=.-: 00:10:27.778 21:17:51 -- scripts/common.sh@335 -- # read -ra ver1 00:10:27.778 21:17:51 -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.778 21:17:51 -- scripts/common.sh@336 -- # read -ra ver2 00:10:27.778 21:17:51 -- scripts/common.sh@337 -- # local 'op=<' 00:10:27.778 21:17:51 -- scripts/common.sh@339 -- # ver1_l=2 00:10:27.778 21:17:51 -- scripts/common.sh@340 -- # ver2_l=1 00:10:27.778 21:17:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:27.778 21:17:51 -- scripts/common.sh@343 -- # case "$op" in 00:10:27.778 21:17:51 -- scripts/common.sh@344 -- # : 1 00:10:27.778 21:17:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:27.778 21:17:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:27.778 21:17:51 -- scripts/common.sh@364 -- # decimal 1 00:10:27.778 21:17:51 -- scripts/common.sh@352 -- # local d=1 00:10:27.778 21:17:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.778 21:17:51 -- scripts/common.sh@354 -- # echo 1 00:10:28.038 21:17:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:28.038 21:17:51 -- scripts/common.sh@365 -- # decimal 2 00:10:28.038 21:17:51 -- scripts/common.sh@352 -- # local d=2 00:10:28.038 21:17:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.038 21:17:51 -- scripts/common.sh@354 -- # echo 2 00:10:28.038 21:17:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:28.038 21:17:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:28.038 21:17:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:28.038 21:17:51 -- scripts/common.sh@367 -- # return 0 00:10:28.038 21:17:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.038 21:17:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:28.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.038 --rc genhtml_branch_coverage=1 00:10:28.038 --rc genhtml_function_coverage=1 00:10:28.038 --rc genhtml_legend=1 00:10:28.038 --rc geninfo_all_blocks=1 00:10:28.038 --rc geninfo_unexecuted_blocks=1 00:10:28.038 00:10:28.038 ' 00:10:28.038 21:17:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:28.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.038 --rc genhtml_branch_coverage=1 00:10:28.038 --rc genhtml_function_coverage=1 00:10:28.038 --rc genhtml_legend=1 00:10:28.038 --rc geninfo_all_blocks=1 00:10:28.038 --rc geninfo_unexecuted_blocks=1 00:10:28.038 00:10:28.038 ' 00:10:28.038 21:17:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:28.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.038 --rc 
genhtml_branch_coverage=1 00:10:28.038 --rc genhtml_function_coverage=1 00:10:28.038 --rc genhtml_legend=1 00:10:28.038 --rc geninfo_all_blocks=1 00:10:28.038 --rc geninfo_unexecuted_blocks=1 00:10:28.039 00:10:28.039 ' 00:10:28.039 21:17:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:28.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.039 --rc genhtml_branch_coverage=1 00:10:28.039 --rc genhtml_function_coverage=1 00:10:28.039 --rc genhtml_legend=1 00:10:28.039 --rc geninfo_all_blocks=1 00:10:28.039 --rc geninfo_unexecuted_blocks=1 00:10:28.039 00:10:28.039 ' 00:10:28.039 21:17:51 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:28.039 21:17:51 -- nvmf/common.sh@7 -- # uname -s 00:10:28.039 21:17:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.039 21:17:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.039 21:17:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.039 21:17:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.039 21:17:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.039 21:17:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.039 21:17:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.039 21:17:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.039 21:17:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.039 21:17:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.039 21:17:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:10:28.039 21:17:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:10:28.039 21:17:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.039 21:17:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.039 21:17:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:28.039 21:17:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:28.039 21:17:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.039 21:17:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.039 21:17:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.039 21:17:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.039 21:17:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.039 21:17:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.039 21:17:51 -- paths/export.sh@5 -- # export PATH 00:10:28.039 21:17:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.039 21:17:51 -- nvmf/common.sh@46 -- # : 0 00:10:28.039 21:17:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:28.039 21:17:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:28.039 21:17:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:28.039 21:17:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.039 21:17:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.039 21:17:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:28.039 21:17:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:28.039 21:17:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:28.039 21:17:51 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.039 21:17:51 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.039 21:17:51 -- target/nmic.sh@14 -- # nvmftestinit 00:10:28.039 21:17:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:28.039 21:17:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.039 21:17:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:28.039 21:17:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:28.039 21:17:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:28.039 21:17:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.039 21:17:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.039 21:17:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.039 21:17:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:28.039 21:17:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:28.039 21:17:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:28.039 21:17:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:28.039 21:17:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:28.039 21:17:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:28.039 21:17:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.039 21:17:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.039 21:17:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:28.039 21:17:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:28.039 21:17:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:28.039 21:17:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:28.039 21:17:51 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:28.039 21:17:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.039 21:17:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:28.039 21:17:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:28.039 21:17:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:28.039 21:17:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:28.039 21:17:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:28.039 21:17:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:28.039 Cannot find device "nvmf_tgt_br" 00:10:28.039 21:17:51 -- nvmf/common.sh@154 -- # true 00:10:28.039 21:17:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:28.039 Cannot find device "nvmf_tgt_br2" 00:10:28.039 21:17:51 -- nvmf/common.sh@155 -- # true 00:10:28.039 21:17:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:28.039 21:17:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:28.039 Cannot find device "nvmf_tgt_br" 00:10:28.039 21:17:51 -- nvmf/common.sh@157 -- # true 00:10:28.039 21:17:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:28.039 Cannot find device "nvmf_tgt_br2" 00:10:28.039 21:17:51 -- nvmf/common.sh@158 -- # true 00:10:28.039 21:17:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:28.039 21:17:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:28.039 21:17:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:28.039 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:28.039 21:17:51 -- nvmf/common.sh@161 -- # true 00:10:28.039 21:17:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:28.039 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:28.039 21:17:51 -- nvmf/common.sh@162 -- # true 00:10:28.039 21:17:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:28.039 21:17:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:28.039 21:17:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:28.039 21:17:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:28.039 21:17:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:28.039 21:17:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:28.039 21:17:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:28.039 21:17:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:28.039 21:17:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:28.299 21:17:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:28.299 21:17:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:28.299 21:17:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:28.299 21:17:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:28.299 21:17:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:28.299 21:17:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:28.299 21:17:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:10:28.299 21:17:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:28.299 21:17:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:28.299 21:17:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:28.299 21:17:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:28.299 21:17:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:28.299 21:17:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:28.299 21:17:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:28.299 21:17:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:28.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:28.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:10:28.299 00:10:28.299 --- 10.0.0.2 ping statistics --- 00:10:28.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.299 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:10:28.299 21:17:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:28.299 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:28.299 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:10:28.299 00:10:28.299 --- 10.0.0.3 ping statistics --- 00:10:28.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.299 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:28.299 21:17:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:28.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:28.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:28.299 00:10:28.299 --- 10.0.0.1 ping statistics --- 00:10:28.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.299 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:28.299 21:17:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.299 21:17:51 -- nvmf/common.sh@421 -- # return 0 00:10:28.299 21:17:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:28.299 21:17:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.299 21:17:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:28.299 21:17:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:28.299 21:17:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.299 21:17:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:28.299 21:17:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:28.299 21:17:51 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:28.299 21:17:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:28.299 21:17:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:28.299 21:17:51 -- common/autotest_common.sh@10 -- # set +x 00:10:28.299 21:17:51 -- nvmf/common.sh@469 -- # nvmfpid=74792 00:10:28.299 21:17:51 -- nvmf/common.sh@470 -- # waitforlisten 74792 00:10:28.299 21:17:51 -- common/autotest_common.sh@829 -- # '[' -z 74792 ']' 00:10:28.299 21:17:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:28.299 21:17:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.299 21:17:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:28.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:28.299 21:17:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.299 21:17:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:28.299 21:17:51 -- common/autotest_common.sh@10 -- # set +x 00:10:28.299 [2024-11-28 21:17:51.980715] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:28.299 [2024-11-28 21:17:51.980811] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.559 [2024-11-28 21:17:52.117983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:28.559 [2024-11-28 21:17:52.150834] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:28.559 [2024-11-28 21:17:52.151019] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:28.559 [2024-11-28 21:17:52.151059] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:28.559 [2024-11-28 21:17:52.151085] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:28.559 [2024-11-28 21:17:52.151166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.559 [2024-11-28 21:17:52.151244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.559 [2024-11-28 21:17:52.151416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.559 [2024-11-28 21:17:52.151422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.559 21:17:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:28.559 21:17:52 -- common/autotest_common.sh@862 -- # return 0 00:10:28.559 21:17:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:28.559 21:17:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:28.559 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:10:28.559 21:17:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:28.559 21:17:52 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:28.559 21:17:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.559 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:10:28.559 [2024-11-28 21:17:52.273640] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:28.559 21:17:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.559 21:17:52 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:28.559 21:17:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.559 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:10:28.818 Malloc0 00:10:28.818 21:17:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.818 21:17:52 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:28.818 21:17:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.818 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:10:28.818 21:17:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.818 21:17:52 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.818 21:17:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.818 21:17:52 
-- common/autotest_common.sh@10 -- # set +x 00:10:28.818 21:17:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.818 21:17:52 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.818 21:17:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.818 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:10:28.818 [2024-11-28 21:17:52.341633] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.818 21:17:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.818 test case1: single bdev can't be used in multiple subsystems 00:10:28.818 21:17:52 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:28.818 21:17:52 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:28.818 21:17:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.819 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:10:28.819 21:17:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.819 21:17:52 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:28.819 21:17:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.819 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:10:28.819 21:17:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.819 21:17:52 -- target/nmic.sh@28 -- # nmic_status=0 00:10:28.819 21:17:52 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:28.819 21:17:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.819 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:10:28.819 [2024-11-28 21:17:52.365496] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:28.819 [2024-11-28 21:17:52.365546] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:28.819 [2024-11-28 21:17:52.365555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.819 request: 00:10:28.819 { 00:10:28.819 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:28.819 "namespace": { 00:10:28.819 "bdev_name": "Malloc0" 00:10:28.819 }, 00:10:28.819 "method": "nvmf_subsystem_add_ns", 00:10:28.819 "req_id": 1 00:10:28.819 } 00:10:28.819 Got JSON-RPC error response 00:10:28.819 response: 00:10:28.819 { 00:10:28.819 "code": -32602, 00:10:28.819 "message": "Invalid parameters" 00:10:28.819 } 00:10:28.819 21:17:52 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:28.819 21:17:52 -- target/nmic.sh@29 -- # nmic_status=1 00:10:28.819 21:17:52 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:28.819 Adding namespace failed - expected result. 00:10:28.819 21:17:52 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
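In plainer terms, the "test case1" trace above reduces to a short RPC sequence: one Malloc bdev is offered as a namespace to two different subsystems, and the second nvmf_subsystem_add_ns is expected to fail because the first subsystem already holds an exclusive_write claim on the bdev, which is exactly the -32602 "Invalid parameters" JSON-RPC error printed above. The following is a minimal hand-run sketch under the assumptions that a target has already been started the way this log shows and that the repo-relative rpc.py used elsewhere in the log can reach the default /var/tmp/spdk.sock; it is illustrative, not the harness code itself.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# create the shared bdev and the two subsystems (names mirror the log above)
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
# first attach succeeds and claims Malloc0 for cnode1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# second attach is the negative case: expected to fail with
# "bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target"
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0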
00:10:28.819 test case2: host connect to nvmf target in multiple paths 00:10:28.819 21:17:52 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:28.819 21:17:52 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:28.819 21:17:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.819 21:17:52 -- common/autotest_common.sh@10 -- # set +x 00:10:28.819 [2024-11-28 21:17:52.377617] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:28.819 21:17:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.819 21:17:52 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:28.819 21:17:52 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:29.079 21:17:52 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:29.079 21:17:52 -- common/autotest_common.sh@1187 -- # local i=0 00:10:29.079 21:17:52 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:29.079 21:17:52 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:29.079 21:17:52 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:30.994 21:17:54 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:30.994 21:17:54 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:30.994 21:17:54 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:30.994 21:17:54 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:30.994 21:17:54 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:30.994 21:17:54 -- common/autotest_common.sh@1197 -- # return 0 00:10:30.994 21:17:54 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:30.994 [global] 00:10:30.994 thread=1 00:10:30.994 invalidate=1 00:10:30.994 rw=write 00:10:30.994 time_based=1 00:10:30.994 runtime=1 00:10:30.994 ioengine=libaio 00:10:30.994 direct=1 00:10:30.994 bs=4096 00:10:30.994 iodepth=1 00:10:30.994 norandommap=0 00:10:30.994 numjobs=1 00:10:30.994 00:10:30.994 verify_dump=1 00:10:30.994 verify_backlog=512 00:10:30.994 verify_state_save=0 00:10:30.994 do_verify=1 00:10:30.994 verify=crc32c-intel 00:10:30.994 [job0] 00:10:30.994 filename=/dev/nvme0n1 00:10:30.994 Could not set queue depth (nvme0n1) 00:10:31.253 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.253 fio-3.35 00:10:31.253 Starting 1 thread 00:10:32.629 00:10:32.629 job0: (groupid=0, jobs=1): err= 0: pid=74875: Thu Nov 28 21:17:55 2024 00:10:32.629 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:32.629 slat (nsec): min=11577, max=57114, avg=14245.84, stdev=4288.40 00:10:32.629 clat (usec): min=129, max=267, avg=173.95, stdev=22.35 00:10:32.629 lat (usec): min=141, max=279, avg=188.20, stdev=22.96 00:10:32.629 clat percentiles (usec): 00:10:32.629 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 155], 00:10:32.629 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:10:32.629 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 206], 
95.00th=[ 215], 00:10:32.629 | 99.00th=[ 233], 99.50th=[ 241], 99.90th=[ 260], 99.95th=[ 265], 00:10:32.629 | 99.99th=[ 269] 00:10:32.629 write: IOPS=3121, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec); 0 zone resets 00:10:32.629 slat (usec): min=14, max=105, avg=22.69, stdev= 6.90 00:10:32.629 clat (usec): min=79, max=430, avg=108.90, stdev=20.28 00:10:32.629 lat (usec): min=97, max=449, avg=131.59, stdev=22.22 00:10:32.629 clat percentiles (usec): 00:10:32.629 | 1.00th=[ 84], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 93], 00:10:32.629 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 104], 60.00th=[ 110], 00:10:32.629 | 70.00th=[ 116], 80.00th=[ 125], 90.00th=[ 137], 95.00th=[ 149], 00:10:32.629 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 190], 99.95th=[ 225], 00:10:32.629 | 99.99th=[ 433] 00:10:32.629 bw ( KiB/s): min=12384, max=12384, per=99.17%, avg=12384.00, stdev= 0.00, samples=1 00:10:32.629 iops : min= 3096, max= 3096, avg=3096.00, stdev= 0.00, samples=1 00:10:32.629 lat (usec) : 100=21.16%, 250=78.72%, 500=0.13% 00:10:32.629 cpu : usr=2.50%, sys=8.70%, ctx=6197, majf=0, minf=5 00:10:32.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.629 issued rwts: total=3072,3125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.629 00:10:32.629 Run status group 0 (all jobs): 00:10:32.629 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:32.629 WRITE: bw=12.2MiB/s (12.8MB/s), 12.2MiB/s-12.2MiB/s (12.8MB/s-12.8MB/s), io=12.2MiB (12.8MB), run=1001-1001msec 00:10:32.629 00:10:32.629 Disk stats (read/write): 00:10:32.629 nvme0n1: ios=2641/3072, merge=0/0, ticks=497/396, in_queue=893, util=91.28% 00:10:32.629 21:17:55 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:32.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:32.629 21:17:56 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:32.629 21:17:56 -- common/autotest_common.sh@1208 -- # local i=0 00:10:32.629 21:17:56 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:32.629 21:17:56 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.629 21:17:56 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:32.629 21:17:56 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.629 21:17:56 -- common/autotest_common.sh@1220 -- # return 0 00:10:32.629 21:17:56 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:32.629 21:17:56 -- target/nmic.sh@53 -- # nvmftestfini 00:10:32.629 21:17:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:32.629 21:17:56 -- nvmf/common.sh@116 -- # sync 00:10:32.629 21:17:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:32.629 21:17:56 -- nvmf/common.sh@119 -- # set +e 00:10:32.629 21:17:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:32.629 21:17:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:32.629 rmmod nvme_tcp 00:10:32.629 rmmod nvme_fabrics 00:10:32.629 rmmod nvme_keyring 00:10:32.629 21:17:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:32.629 21:17:56 -- nvmf/common.sh@123 -- # set -e 00:10:32.629 21:17:56 -- nvmf/common.sh@124 -- # return 0 00:10:32.629 21:17:56 -- nvmf/common.sh@477 -- # '[' -n 74792 
']' 00:10:32.629 21:17:56 -- nvmf/common.sh@478 -- # killprocess 74792 00:10:32.629 21:17:56 -- common/autotest_common.sh@936 -- # '[' -z 74792 ']' 00:10:32.629 21:17:56 -- common/autotest_common.sh@940 -- # kill -0 74792 00:10:32.629 21:17:56 -- common/autotest_common.sh@941 -- # uname 00:10:32.629 21:17:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:32.629 21:17:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74792 00:10:32.629 21:17:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:32.629 21:17:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:32.629 killing process with pid 74792 00:10:32.629 21:17:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74792' 00:10:32.629 21:17:56 -- common/autotest_common.sh@955 -- # kill 74792 00:10:32.629 21:17:56 -- common/autotest_common.sh@960 -- # wait 74792 00:10:32.629 21:17:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:32.629 21:17:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:32.629 21:17:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:32.629 21:17:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.629 21:17:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:32.629 21:17:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.629 21:17:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:32.629 21:17:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.889 21:17:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:32.889 00:10:32.889 real 0m5.049s 00:10:32.889 user 0m15.377s 00:10:32.889 sys 0m2.319s 00:10:32.889 21:17:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.889 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:10:32.889 ************************************ 00:10:32.889 END TEST nvmf_nmic 00:10:32.889 ************************************ 00:10:32.889 21:17:56 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:32.889 21:17:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:32.889 21:17:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.889 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:10:32.889 ************************************ 00:10:32.889 START TEST nvmf_fio_target 00:10:32.889 ************************************ 00:10:32.889 21:17:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:32.889 * Looking for test storage... 
00:10:32.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.889 21:17:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:32.889 21:17:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:32.889 21:17:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:32.889 21:17:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:32.889 21:17:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:32.889 21:17:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:32.889 21:17:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:32.889 21:17:56 -- scripts/common.sh@335 -- # IFS=.-: 00:10:32.889 21:17:56 -- scripts/common.sh@335 -- # read -ra ver1 00:10:32.889 21:17:56 -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.889 21:17:56 -- scripts/common.sh@336 -- # read -ra ver2 00:10:32.889 21:17:56 -- scripts/common.sh@337 -- # local 'op=<' 00:10:32.889 21:17:56 -- scripts/common.sh@339 -- # ver1_l=2 00:10:32.889 21:17:56 -- scripts/common.sh@340 -- # ver2_l=1 00:10:32.889 21:17:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:32.889 21:17:56 -- scripts/common.sh@343 -- # case "$op" in 00:10:32.889 21:17:56 -- scripts/common.sh@344 -- # : 1 00:10:32.889 21:17:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:32.889 21:17:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.889 21:17:56 -- scripts/common.sh@364 -- # decimal 1 00:10:32.889 21:17:56 -- scripts/common.sh@352 -- # local d=1 00:10:32.889 21:17:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.889 21:17:56 -- scripts/common.sh@354 -- # echo 1 00:10:32.889 21:17:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:32.889 21:17:56 -- scripts/common.sh@365 -- # decimal 2 00:10:32.889 21:17:56 -- scripts/common.sh@352 -- # local d=2 00:10:32.889 21:17:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.889 21:17:56 -- scripts/common.sh@354 -- # echo 2 00:10:32.889 21:17:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:32.889 21:17:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:32.889 21:17:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:32.889 21:17:56 -- scripts/common.sh@367 -- # return 0 00:10:32.889 21:17:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.889 21:17:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:32.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.889 --rc genhtml_branch_coverage=1 00:10:32.889 --rc genhtml_function_coverage=1 00:10:32.889 --rc genhtml_legend=1 00:10:32.889 --rc geninfo_all_blocks=1 00:10:32.889 --rc geninfo_unexecuted_blocks=1 00:10:32.889 00:10:32.889 ' 00:10:32.889 21:17:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:32.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.889 --rc genhtml_branch_coverage=1 00:10:32.889 --rc genhtml_function_coverage=1 00:10:32.889 --rc genhtml_legend=1 00:10:32.889 --rc geninfo_all_blocks=1 00:10:32.889 --rc geninfo_unexecuted_blocks=1 00:10:32.889 00:10:32.889 ' 00:10:32.889 21:17:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:32.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.889 --rc genhtml_branch_coverage=1 00:10:32.889 --rc genhtml_function_coverage=1 00:10:32.889 --rc genhtml_legend=1 00:10:32.889 --rc geninfo_all_blocks=1 00:10:32.889 --rc geninfo_unexecuted_blocks=1 00:10:32.889 00:10:32.889 ' 00:10:32.889 
21:17:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:32.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.889 --rc genhtml_branch_coverage=1 00:10:32.889 --rc genhtml_function_coverage=1 00:10:32.889 --rc genhtml_legend=1 00:10:32.889 --rc geninfo_all_blocks=1 00:10:32.889 --rc geninfo_unexecuted_blocks=1 00:10:32.889 00:10:32.889 ' 00:10:32.889 21:17:56 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:32.889 21:17:56 -- nvmf/common.sh@7 -- # uname -s 00:10:32.889 21:17:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.889 21:17:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.889 21:17:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.889 21:17:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.889 21:17:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.889 21:17:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.889 21:17:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.889 21:17:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.889 21:17:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.889 21:17:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.889 21:17:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:10:32.890 21:17:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:10:32.890 21:17:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.890 21:17:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.890 21:17:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:32.890 21:17:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:32.890 21:17:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.890 21:17:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.890 21:17:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.890 21:17:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.890 21:17:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.890 21:17:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.890 21:17:56 -- paths/export.sh@5 -- # export PATH 00:10:32.890 21:17:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.890 21:17:56 -- nvmf/common.sh@46 -- # : 0 00:10:32.890 21:17:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:32.890 21:17:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:32.890 21:17:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:32.890 21:17:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.890 21:17:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.890 21:17:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:32.890 21:17:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:32.890 21:17:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:32.890 21:17:56 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:32.890 21:17:56 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:32.890 21:17:56 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:32.890 21:17:56 -- target/fio.sh@16 -- # nvmftestinit 00:10:32.890 21:17:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:32.890 21:17:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.890 21:17:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:32.890 21:17:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:32.890 21:17:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:32.890 21:17:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.890 21:17:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:32.890 21:17:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.890 21:17:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:32.890 21:17:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:32.890 21:17:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:32.890 21:17:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:32.890 21:17:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:32.890 21:17:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:32.890 21:17:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.890 21:17:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.890 21:17:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:32.890 21:17:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:32.890 21:17:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:32.890 21:17:56 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:32.890 21:17:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:32.890 21:17:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.890 21:17:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:32.890 21:17:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:32.890 21:17:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:32.890 21:17:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:32.890 21:17:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:33.150 21:17:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:33.150 Cannot find device "nvmf_tgt_br" 00:10:33.150 21:17:56 -- nvmf/common.sh@154 -- # true 00:10:33.150 21:17:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:33.150 Cannot find device "nvmf_tgt_br2" 00:10:33.150 21:17:56 -- nvmf/common.sh@155 -- # true 00:10:33.150 21:17:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:33.150 21:17:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:33.150 Cannot find device "nvmf_tgt_br" 00:10:33.150 21:17:56 -- nvmf/common.sh@157 -- # true 00:10:33.150 21:17:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:33.150 Cannot find device "nvmf_tgt_br2" 00:10:33.150 21:17:56 -- nvmf/common.sh@158 -- # true 00:10:33.150 21:17:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:33.150 21:17:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:33.150 21:17:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:33.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:33.150 21:17:56 -- nvmf/common.sh@161 -- # true 00:10:33.150 21:17:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:33.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:33.150 21:17:56 -- nvmf/common.sh@162 -- # true 00:10:33.150 21:17:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:33.150 21:17:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:33.150 21:17:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:33.150 21:17:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:33.150 21:17:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:33.150 21:17:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:33.150 21:17:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:33.150 21:17:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:33.150 21:17:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:33.150 21:17:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:33.150 21:17:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:33.150 21:17:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:33.150 21:17:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:33.150 21:17:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:33.150 21:17:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
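To make the interface plumbing in this stretch easier to follow (the bridge enslaving, iptables rule and ping checks continue just below), the veth/namespace topology that nvmf_veth_init is assembling can be condensed roughly as follows. The interface names and 10.0.0.x addresses are the ones traced in this log; treat this as an illustrative summary of the sequence rather than a substitute for nvmf/common.sh.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target listener address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
# as traced below, the *_br peers are then enslaved to an nvmf_br bridge and an
# iptables ACCEPT rule for TCP port 4420 is installed on nvmf_init_if before the
# connectivity pings to 10.0.0.2, 10.0.0.3 and (inside the namespace) 10.0.0.1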
00:10:33.150 21:17:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:33.150 21:17:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:33.150 21:17:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:33.150 21:17:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:33.409 21:17:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:33.409 21:17:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:33.410 21:17:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:33.410 21:17:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:33.410 21:17:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:33.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:33.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:10:33.410 00:10:33.410 --- 10.0.0.2 ping statistics --- 00:10:33.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.410 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:33.410 21:17:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:33.410 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:33.410 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:10:33.410 00:10:33.410 --- 10.0.0.3 ping statistics --- 00:10:33.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.410 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:33.410 21:17:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:33.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:33.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:10:33.410 00:10:33.410 --- 10.0.0.1 ping statistics --- 00:10:33.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.410 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:10:33.410 21:17:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.410 21:17:56 -- nvmf/common.sh@421 -- # return 0 00:10:33.410 21:17:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:33.410 21:17:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.410 21:17:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:33.410 21:17:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:33.410 21:17:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.410 21:17:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:33.410 21:17:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:33.410 21:17:56 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:33.410 21:17:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:33.410 21:17:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:33.410 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:10:33.410 21:17:56 -- nvmf/common.sh@469 -- # nvmfpid=75059 00:10:33.410 21:17:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:33.410 21:17:56 -- nvmf/common.sh@470 -- # waitforlisten 75059 00:10:33.410 21:17:56 -- common/autotest_common.sh@829 -- # '[' -z 75059 ']' 00:10:33.410 21:17:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.410 21:17:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:33.410 21:17:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:33.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.410 21:17:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:33.410 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:10:33.410 [2024-11-28 21:17:57.019567] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:33.410 [2024-11-28 21:17:57.019671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.669 [2024-11-28 21:17:57.158745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.669 [2024-11-28 21:17:57.191953] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:33.669 [2024-11-28 21:17:57.192144] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.669 [2024-11-28 21:17:57.192157] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.669 [2024-11-28 21:17:57.192166] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.669 [2024-11-28 21:17:57.192237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.669 [2024-11-28 21:17:57.192362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.669 [2024-11-28 21:17:57.192537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.669 [2024-11-28 21:17:57.192545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.669 21:17:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.669 21:17:57 -- common/autotest_common.sh@862 -- # return 0 00:10:33.669 21:17:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:33.669 21:17:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:33.669 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:10:33.669 21:17:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.669 21:17:57 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:33.927 [2024-11-28 21:17:57.520288] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.927 21:17:57 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.186 21:17:57 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:34.186 21:17:57 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.444 21:17:58 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:34.444 21:17:58 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.702 21:17:58 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:34.702 21:17:58 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.961 21:17:58 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:34.961 21:17:58 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:35.220 21:17:58 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:35.538 21:17:59 -- target/fio.sh@29 -- # 
concat_malloc_bdevs='Malloc4 ' 00:10:35.538 21:17:59 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:35.796 21:17:59 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:35.796 21:17:59 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.055 21:17:59 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:36.055 21:17:59 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:36.315 21:17:59 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:36.574 21:18:00 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:36.574 21:18:00 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:36.833 21:18:00 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:36.833 21:18:00 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:36.833 21:18:00 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.092 [2024-11-28 21:18:00.806449] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.092 21:18:00 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:37.660 21:18:01 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:37.919 21:18:01 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:37.919 21:18:01 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:37.919 21:18:01 -- common/autotest_common.sh@1187 -- # local i=0 00:10:37.919 21:18:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:37.919 21:18:01 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:10:37.919 21:18:01 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:10:37.919 21:18:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:39.821 21:18:03 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:39.821 21:18:03 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:39.821 21:18:03 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.080 21:18:03 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:10:40.080 21:18:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.080 21:18:03 -- common/autotest_common.sh@1197 -- # return 0 00:10:40.080 21:18:03 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:40.080 [global] 00:10:40.080 thread=1 00:10:40.080 invalidate=1 00:10:40.080 rw=write 00:10:40.080 time_based=1 00:10:40.080 runtime=1 00:10:40.080 ioengine=libaio 00:10:40.080 direct=1 00:10:40.080 bs=4096 00:10:40.080 iodepth=1 00:10:40.080 norandommap=0 00:10:40.080 numjobs=1 00:10:40.080 00:10:40.080 verify_dump=1 00:10:40.080 verify_backlog=512 00:10:40.080 
verify_state_save=0 00:10:40.080 do_verify=1 00:10:40.080 verify=crc32c-intel 00:10:40.080 [job0] 00:10:40.080 filename=/dev/nvme0n1 00:10:40.080 [job1] 00:10:40.080 filename=/dev/nvme0n2 00:10:40.080 [job2] 00:10:40.080 filename=/dev/nvme0n3 00:10:40.080 [job3] 00:10:40.080 filename=/dev/nvme0n4 00:10:40.080 Could not set queue depth (nvme0n1) 00:10:40.080 Could not set queue depth (nvme0n2) 00:10:40.080 Could not set queue depth (nvme0n3) 00:10:40.080 Could not set queue depth (nvme0n4) 00:10:40.080 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.080 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.080 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.080 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.080 fio-3.35 00:10:40.080 Starting 4 threads 00:10:41.456 00:10:41.456 job0: (groupid=0, jobs=1): err= 0: pid=75237: Thu Nov 28 21:18:04 2024 00:10:41.456 read: IOPS=2973, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec) 00:10:41.456 slat (nsec): min=12127, max=49267, avg=14936.39, stdev=3129.52 00:10:41.456 clat (usec): min=132, max=1706, avg=165.03, stdev=32.56 00:10:41.457 lat (usec): min=146, max=1721, avg=179.97, stdev=32.61 00:10:41.457 clat percentiles (usec): 00:10:41.457 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:10:41.457 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:10:41.457 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 190], 00:10:41.457 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 293], 99.95th=[ 515], 00:10:41.457 | 99.99th=[ 1713] 00:10:41.457 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:41.457 slat (nsec): min=18318, max=87650, avg=23258.35, stdev=5623.47 00:10:41.457 clat (usec): min=94, max=1678, avg=124.59, stdev=31.10 00:10:41.457 lat (usec): min=114, max=1698, avg=147.85, stdev=31.66 00:10:41.457 clat percentiles (usec): 00:10:41.457 | 1.00th=[ 101], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 114], 00:10:41.457 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 126], 00:10:41.457 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 143], 95.00th=[ 149], 00:10:41.457 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 206], 99.95th=[ 249], 00:10:41.457 | 99.99th=[ 1680] 00:10:41.457 bw ( KiB/s): min=12288, max=12288, per=25.05%, avg=12288.00, stdev= 0.00, samples=1 00:10:41.457 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:41.457 lat (usec) : 100=0.35%, 250=99.52%, 500=0.08%, 750=0.02% 00:10:41.457 lat (msec) : 2=0.03% 00:10:41.457 cpu : usr=2.30%, sys=9.10%, ctx=6048, majf=0, minf=15 00:10:41.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.457 issued rwts: total=2976,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.457 job1: (groupid=0, jobs=1): err= 0: pid=75238: Thu Nov 28 21:18:04 2024 00:10:41.457 read: IOPS=3010, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1001msec) 00:10:41.457 slat (nsec): min=11406, max=44997, avg=14274.18, stdev=3216.23 00:10:41.457 clat (usec): min=129, max=224, avg=164.27, stdev=13.66 00:10:41.457 lat (usec): min=141, 
max=239, avg=178.54, stdev=13.84 00:10:41.457 clat percentiles (usec): 00:10:41.457 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:10:41.457 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:10:41.457 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 190], 00:10:41.457 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 221], 99.95th=[ 221], 00:10:41.457 | 99.99th=[ 225] 00:10:41.457 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:41.457 slat (nsec): min=15715, max=85047, avg=22699.46, stdev=5886.66 00:10:41.457 clat (usec): min=89, max=441, avg=124.40, stdev=14.24 00:10:41.457 lat (usec): min=115, max=462, avg=147.10, stdev=15.36 00:10:41.457 clat percentiles (usec): 00:10:41.457 | 1.00th=[ 101], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 114], 00:10:41.457 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 126], 00:10:41.457 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 143], 95.00th=[ 149], 00:10:41.457 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 182], 00:10:41.457 | 99.99th=[ 441] 00:10:41.457 bw ( KiB/s): min=12288, max=12288, per=25.05%, avg=12288.00, stdev= 0.00, samples=1 00:10:41.457 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:41.457 lat (usec) : 100=0.28%, 250=99.70%, 500=0.02% 00:10:41.457 cpu : usr=1.60%, sys=9.30%, ctx=6086, majf=0, minf=7 00:10:41.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.457 issued rwts: total=3014,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.457 job2: (groupid=0, jobs=1): err= 0: pid=75239: Thu Nov 28 21:18:04 2024 00:10:41.457 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:41.457 slat (nsec): min=12239, max=49646, avg=15331.99, stdev=3664.49 00:10:41.457 clat (usec): min=141, max=2604, avg=179.07, stdev=51.21 00:10:41.457 lat (usec): min=155, max=2620, avg=194.40, stdev=51.24 00:10:41.457 clat percentiles (usec): 00:10:41.457 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:10:41.457 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:41.457 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 206], 00:10:41.457 | 99.00th=[ 225], 99.50th=[ 233], 99.90th=[ 347], 99.95th=[ 537], 00:10:41.457 | 99.99th=[ 2606] 00:10:41.457 write: IOPS=3060, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:41.457 slat (nsec): min=19386, max=83108, avg=23824.22, stdev=5495.80 00:10:41.457 clat (usec): min=107, max=390, avg=136.97, stdev=15.85 00:10:41.457 lat (usec): min=128, max=410, avg=160.80, stdev=16.57 00:10:41.457 clat percentiles (usec): 00:10:41.457 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 125], 00:10:41.457 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 139], 00:10:41.457 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 163], 00:10:41.457 | 99.00th=[ 186], 99.50th=[ 196], 99.90th=[ 225], 99.95th=[ 269], 00:10:41.457 | 99.99th=[ 392] 00:10:41.457 bw ( KiB/s): min=12288, max=12288, per=25.05%, avg=12288.00, stdev= 0.00, samples=1 00:10:41.457 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:41.457 lat (usec) : 250=99.84%, 500=0.12%, 750=0.02% 00:10:41.457 lat (msec) : 4=0.02% 00:10:41.457 cpu : usr=2.00%, sys=8.80%, ctx=5624, 
majf=0, minf=5 00:10:41.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.457 issued rwts: total=2560,3064,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.457 job3: (groupid=0, jobs=1): err= 0: pid=75240: Thu Nov 28 21:18:04 2024 00:10:41.457 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:41.457 slat (nsec): min=11854, max=43767, avg=14528.07, stdev=3111.03 00:10:41.457 clat (usec): min=136, max=768, avg=179.44, stdev=22.19 00:10:41.457 lat (usec): min=148, max=781, avg=193.97, stdev=22.21 00:10:41.457 clat percentiles (usec): 00:10:41.457 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:10:41.457 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:10:41.457 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 206], 00:10:41.457 | 99.00th=[ 225], 99.50th=[ 281], 99.90th=[ 404], 99.95th=[ 408], 00:10:41.457 | 99.99th=[ 766] 00:10:41.457 write: IOPS=3063, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:41.457 slat (nsec): min=16169, max=88271, avg=22907.08, stdev=5475.57 00:10:41.457 clat (usec): min=105, max=495, avg=138.23, stdev=16.76 00:10:41.457 lat (usec): min=124, max=517, avg=161.14, stdev=17.46 00:10:41.457 clat percentiles (usec): 00:10:41.457 | 1.00th=[ 114], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 127], 00:10:41.457 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:10:41.457 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 165], 00:10:41.457 | 99.00th=[ 182], 99.50th=[ 198], 99.90th=[ 273], 99.95th=[ 355], 00:10:41.457 | 99.99th=[ 494] 00:10:41.457 bw ( KiB/s): min=12288, max=12288, per=25.05%, avg=12288.00, stdev= 0.00, samples=1 00:10:41.457 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:41.457 lat (usec) : 250=99.61%, 500=0.37%, 1000=0.02% 00:10:41.457 cpu : usr=1.90%, sys=8.60%, ctx=5627, majf=0, minf=9 00:10:41.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.457 issued rwts: total=2560,3067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.457 00:10:41.457 Run status group 0 (all jobs): 00:10:41.457 READ: bw=43.4MiB/s (45.5MB/s), 9.99MiB/s-11.8MiB/s (10.5MB/s-12.3MB/s), io=43.4MiB (45.5MB), run=1001-1001msec 00:10:41.457 WRITE: bw=47.9MiB/s (50.2MB/s), 12.0MiB/s-12.0MiB/s (12.5MB/s-12.6MB/s), io=47.9MiB (50.3MB), run=1001-1001msec 00:10:41.457 00:10:41.457 Disk stats (read/write): 00:10:41.457 nvme0n1: ios=2610/2635, merge=0/0, ticks=450/360, in_queue=810, util=88.08% 00:10:41.457 nvme0n2: ios=2608/2681, merge=0/0, ticks=446/350, in_queue=796, util=89.06% 00:10:41.457 nvme0n3: ios=2270/2560, merge=0/0, ticks=413/375, in_queue=788, util=89.25% 00:10:41.457 nvme0n4: ios=2273/2560, merge=0/0, ticks=417/374, in_queue=791, util=89.71% 00:10:41.457 21:18:04 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:41.457 [global] 00:10:41.457 thread=1 00:10:41.457 invalidate=1 00:10:41.457 rw=randwrite 00:10:41.457 time_based=1 00:10:41.457 runtime=1 
00:10:41.457 ioengine=libaio 00:10:41.457 direct=1 00:10:41.457 bs=4096 00:10:41.457 iodepth=1 00:10:41.457 norandommap=0 00:10:41.457 numjobs=1 00:10:41.457 00:10:41.457 verify_dump=1 00:10:41.457 verify_backlog=512 00:10:41.457 verify_state_save=0 00:10:41.457 do_verify=1 00:10:41.457 verify=crc32c-intel 00:10:41.457 [job0] 00:10:41.457 filename=/dev/nvme0n1 00:10:41.457 [job1] 00:10:41.457 filename=/dev/nvme0n2 00:10:41.457 [job2] 00:10:41.457 filename=/dev/nvme0n3 00:10:41.457 [job3] 00:10:41.457 filename=/dev/nvme0n4 00:10:41.457 Could not set queue depth (nvme0n1) 00:10:41.457 Could not set queue depth (nvme0n2) 00:10:41.457 Could not set queue depth (nvme0n3) 00:10:41.457 Could not set queue depth (nvme0n4) 00:10:41.457 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.457 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.457 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.457 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.457 fio-3.35 00:10:41.457 Starting 4 threads 00:10:42.832 00:10:42.832 job0: (groupid=0, jobs=1): err= 0: pid=75293: Thu Nov 28 21:18:06 2024 00:10:42.832 read: IOPS=1072, BW=4292KiB/s (4395kB/s)(4296KiB/1001msec) 00:10:42.832 slat (nsec): min=10923, max=90061, avg=22929.33, stdev=10892.24 00:10:42.832 clat (usec): min=240, max=1149, avg=417.47, stdev=104.55 00:10:42.832 lat (usec): min=254, max=1173, avg=440.40, stdev=107.29 00:10:42.832 clat percentiles (usec): 00:10:42.832 | 1.00th=[ 265], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 334], 00:10:42.832 | 30.00th=[ 351], 40.00th=[ 363], 50.00th=[ 379], 60.00th=[ 408], 00:10:42.832 | 70.00th=[ 478], 80.00th=[ 515], 90.00th=[ 545], 95.00th=[ 578], 00:10:42.832 | 99.00th=[ 725], 99.50th=[ 955], 99.90th=[ 1057], 99.95th=[ 1156], 00:10:42.832 | 99.99th=[ 1156] 00:10:42.832 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:42.832 slat (usec): min=15, max=100, avg=31.04, stdev=11.27 00:10:42.832 clat (usec): min=116, max=775, avg=307.50, stdev=103.08 00:10:42.832 lat (usec): min=137, max=813, avg=338.54, stdev=107.98 00:10:42.832 clat percentiles (usec): 00:10:42.832 | 1.00th=[ 124], 5.00th=[ 135], 10.00th=[ 147], 20.00th=[ 229], 00:10:42.832 | 30.00th=[ 269], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 322], 00:10:42.832 | 70.00th=[ 343], 80.00th=[ 379], 90.00th=[ 457], 95.00th=[ 482], 00:10:42.832 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 701], 99.95th=[ 775], 00:10:42.832 | 99.99th=[ 775] 00:10:42.832 bw ( KiB/s): min= 5456, max= 5456, per=20.51%, avg=5456.00, stdev= 0.00, samples=1 00:10:42.832 iops : min= 1364, max= 1364, avg=1364.00, stdev= 0.00, samples=1 00:10:42.832 lat (usec) : 250=14.33%, 500=73.56%, 750=11.69%, 1000=0.23% 00:10:42.832 lat (msec) : 2=0.19% 00:10:42.832 cpu : usr=2.20%, sys=5.20%, ctx=2610, majf=0, minf=9 00:10:42.832 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.832 issued rwts: total=1074,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.832 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.832 job1: (groupid=0, jobs=1): err= 0: pid=75294: Thu Nov 28 21:18:06 2024 00:10:42.832 
read: IOPS=1374, BW=5499KiB/s (5630kB/s)(5504KiB/1001msec) 00:10:42.832 slat (nsec): min=10367, max=43329, avg=13717.94, stdev=3660.82 00:10:42.832 clat (usec): min=271, max=737, avg=381.65, stdev=78.80 00:10:42.832 lat (usec): min=290, max=752, avg=395.36, stdev=79.31 00:10:42.832 clat percentiles (usec): 00:10:42.832 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 330], 00:10:42.832 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 371], 00:10:42.833 | 70.00th=[ 383], 80.00th=[ 412], 90.00th=[ 502], 95.00th=[ 578], 00:10:42.833 | 99.00th=[ 652], 99.50th=[ 676], 99.90th=[ 734], 99.95th=[ 734], 00:10:42.833 | 99.99th=[ 734] 00:10:42.833 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:42.833 slat (nsec): min=13108, max=95863, avg=26838.74, stdev=9139.71 00:10:42.833 clat (usec): min=168, max=799, avg=266.28, stdev=65.06 00:10:42.833 lat (usec): min=185, max=824, avg=293.12, stdev=66.80 00:10:42.833 clat percentiles (usec): 00:10:42.833 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 210], 00:10:42.833 | 30.00th=[ 225], 40.00th=[ 247], 50.00th=[ 265], 60.00th=[ 277], 00:10:42.833 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[ 371], 00:10:42.833 | 99.00th=[ 523], 99.50th=[ 562], 99.90th=[ 750], 99.95th=[ 799], 00:10:42.833 | 99.99th=[ 799] 00:10:42.833 bw ( KiB/s): min= 8192, max= 8192, per=30.80%, avg=8192.00, stdev= 0.00, samples=1 00:10:42.833 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:42.833 lat (usec) : 250=21.81%, 500=72.84%, 750=5.32%, 1000=0.03% 00:10:42.833 cpu : usr=1.60%, sys=4.80%, ctx=2913, majf=0, minf=13 00:10:42.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.833 issued rwts: total=1376,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.833 job2: (groupid=0, jobs=1): err= 0: pid=75295: Thu Nov 28 21:18:06 2024 00:10:42.833 read: IOPS=1690, BW=6761KiB/s (6924kB/s)(6768KiB/1001msec) 00:10:42.833 slat (nsec): min=12026, max=70807, avg=18674.86, stdev=6069.90 00:10:42.833 clat (usec): min=135, max=7686, avg=274.28, stdev=219.49 00:10:42.833 lat (usec): min=150, max=7713, avg=292.96, stdev=221.46 00:10:42.833 clat percentiles (usec): 00:10:42.833 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 172], 00:10:42.833 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 198], 60.00th=[ 247], 00:10:42.833 | 70.00th=[ 347], 80.00th=[ 388], 90.00th=[ 445], 95.00th=[ 510], 00:10:42.833 | 99.00th=[ 570], 99.50th=[ 660], 99.90th=[ 1319], 99.95th=[ 7701], 00:10:42.833 | 99.99th=[ 7701] 00:10:42.833 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:42.833 slat (usec): min=16, max=107, avg=29.60, stdev= 9.85 00:10:42.833 clat (usec): min=97, max=2942, avg=212.26, stdev=131.26 00:10:42.833 lat (usec): min=119, max=2966, avg=241.86, stdev=135.28 00:10:42.833 clat percentiles (usec): 00:10:42.833 | 1.00th=[ 103], 5.00th=[ 111], 10.00th=[ 116], 20.00th=[ 123], 00:10:42.833 | 30.00th=[ 130], 40.00th=[ 139], 50.00th=[ 153], 60.00th=[ 190], 00:10:42.833 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 343], 95.00th=[ 371], 00:10:42.833 | 99.00th=[ 490], 99.50th=[ 545], 99.90th=[ 1926], 99.95th=[ 1958], 00:10:42.833 | 99.99th=[ 2933] 00:10:42.833 bw ( KiB/s): min=11432, max=11432, per=42.98%, 
avg=11432.00, stdev= 0.00, samples=1 00:10:42.833 iops : min= 2858, max= 2858, avg=2858.00, stdev= 0.00, samples=1 00:10:42.833 lat (usec) : 100=0.13%, 250=61.12%, 500=35.37%, 750=3.13%, 1000=0.03% 00:10:42.833 lat (msec) : 2=0.16%, 4=0.03%, 10=0.03% 00:10:42.833 cpu : usr=2.10%, sys=7.40%, ctx=3743, majf=0, minf=9 00:10:42.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.833 issued rwts: total=1692,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.833 job3: (groupid=0, jobs=1): err= 0: pid=75296: Thu Nov 28 21:18:06 2024 00:10:42.833 read: IOPS=1374, BW=5499KiB/s (5630kB/s)(5504KiB/1001msec) 00:10:42.833 slat (nsec): min=10405, max=68721, avg=21348.00, stdev=6133.47 00:10:42.833 clat (usec): min=166, max=723, avg=373.00, stdev=76.58 00:10:42.833 lat (usec): min=193, max=746, avg=394.35, stdev=79.00 00:10:42.833 clat percentiles (usec): 00:10:42.833 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 310], 20.00th=[ 322], 00:10:42.833 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 363], 00:10:42.833 | 70.00th=[ 375], 80.00th=[ 400], 90.00th=[ 490], 95.00th=[ 562], 00:10:42.833 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 725], 99.95th=[ 725], 00:10:42.833 | 99.99th=[ 725] 00:10:42.833 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:42.833 slat (nsec): min=13581, max=66742, avg=24956.74, stdev=7351.63 00:10:42.833 clat (usec): min=159, max=1033, avg=268.50, stdev=68.61 00:10:42.833 lat (usec): min=185, max=1054, avg=293.45, stdev=69.25 00:10:42.833 clat percentiles (usec): 00:10:42.833 | 1.00th=[ 174], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 212], 00:10:42.833 | 30.00th=[ 229], 40.00th=[ 247], 50.00th=[ 265], 60.00th=[ 277], 00:10:42.833 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 334], 95.00th=[ 375], 00:10:42.833 | 99.00th=[ 515], 99.50th=[ 586], 99.90th=[ 848], 99.95th=[ 1037], 00:10:42.833 | 99.99th=[ 1037] 00:10:42.833 bw ( KiB/s): min= 8192, max= 8192, per=30.80%, avg=8192.00, stdev= 0.00, samples=1 00:10:42.833 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:42.833 lat (usec) : 250=22.12%, 500=73.04%, 750=4.77%, 1000=0.03% 00:10:42.833 lat (msec) : 2=0.03% 00:10:42.833 cpu : usr=1.60%, sys=5.90%, ctx=2912, majf=0, minf=13 00:10:42.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.833 issued rwts: total=1376,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.833 00:10:42.833 Run status group 0 (all jobs): 00:10:42.833 READ: bw=21.5MiB/s (22.6MB/s), 4292KiB/s-6761KiB/s (4395kB/s-6924kB/s), io=21.6MiB (22.6MB), run=1001-1001msec 00:10:42.833 WRITE: bw=26.0MiB/s (27.2MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:10:42.833 00:10:42.833 Disk stats (read/write): 00:10:42.833 nvme0n1: ios=1074/1122, merge=0/0, ticks=466/361, in_queue=827, util=88.88% 00:10:42.833 nvme0n2: ios=1138/1536, merge=0/0, ticks=405/417, in_queue=822, util=89.30% 00:10:42.833 nvme0n3: ios=1553/1847, merge=0/0, ticks=440/388, in_queue=828, util=88.73% 00:10:42.833 
nvme0n4: ios=1089/1536, merge=0/0, ticks=396/385, in_queue=781, util=89.88% 00:10:42.833 21:18:06 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:42.833 [global] 00:10:42.833 thread=1 00:10:42.833 invalidate=1 00:10:42.833 rw=write 00:10:42.833 time_based=1 00:10:42.833 runtime=1 00:10:42.833 ioengine=libaio 00:10:42.833 direct=1 00:10:42.833 bs=4096 00:10:42.833 iodepth=128 00:10:42.833 norandommap=0 00:10:42.833 numjobs=1 00:10:42.833 00:10:42.833 verify_dump=1 00:10:42.833 verify_backlog=512 00:10:42.833 verify_state_save=0 00:10:42.833 do_verify=1 00:10:42.833 verify=crc32c-intel 00:10:42.833 [job0] 00:10:42.833 filename=/dev/nvme0n1 00:10:42.833 [job1] 00:10:42.833 filename=/dev/nvme0n2 00:10:42.833 [job2] 00:10:42.833 filename=/dev/nvme0n3 00:10:42.833 [job3] 00:10:42.833 filename=/dev/nvme0n4 00:10:42.833 Could not set queue depth (nvme0n1) 00:10:42.833 Could not set queue depth (nvme0n2) 00:10:42.833 Could not set queue depth (nvme0n3) 00:10:42.833 Could not set queue depth (nvme0n4) 00:10:42.833 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.833 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.833 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.833 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.833 fio-3.35 00:10:42.833 Starting 4 threads 00:10:44.323 00:10:44.323 job0: (groupid=0, jobs=1): err= 0: pid=75355: Thu Nov 28 21:18:07 2024 00:10:44.323 read: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec) 00:10:44.323 slat (usec): min=3, max=2264, avg=68.65, stdev=302.57 00:10:44.323 clat (usec): min=6696, max=10737, avg=9382.60, stdev=533.82 00:10:44.323 lat (usec): min=8310, max=12092, avg=9451.24, stdev=448.33 00:10:44.323 clat percentiles (usec): 00:10:44.323 | 1.00th=[ 7570], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8979], 00:10:44.323 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9503], 00:10:44.323 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10159], 00:10:44.323 | 99.00th=[10421], 99.50th=[10552], 99.90th=[10683], 99.95th=[10683], 00:10:44.323 | 99.99th=[10683] 00:10:44.323 write: IOPS=6906, BW=27.0MiB/s (28.3MB/s)(27.0MiB/1001msec); 0 zone resets 00:10:44.323 slat (usec): min=11, max=1963, avg=71.25, stdev=271.13 00:10:44.323 clat (usec): min=127, max=10374, avg=9266.43, stdev=788.08 00:10:44.323 lat (usec): min=1753, max=11434, avg=9337.67, stdev=746.94 00:10:44.323 clat percentiles (usec): 00:10:44.323 | 1.00th=[ 6456], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8979], 00:10:44.323 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9503], 00:10:44.323 | 70.00th=[ 9634], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10028], 00:10:44.323 | 99.00th=[10159], 99.50th=[10290], 99.90th=[10290], 99.95th=[10421], 00:10:44.323 | 99.99th=[10421] 00:10:44.323 bw ( KiB/s): min=28176, max=28176, per=53.79%, avg=28176.00, stdev= 0.00, samples=1 00:10:44.323 iops : min= 7044, max= 7044, avg=7044.00, stdev= 0.00, samples=1 00:10:44.323 lat (usec) : 250=0.01% 00:10:44.323 lat (msec) : 2=0.05%, 4=0.18%, 10=92.27%, 20=7.49% 00:10:44.323 cpu : usr=6.30%, sys=17.60%, ctx=451, majf=0, minf=10 00:10:44.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:44.323 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.323 issued rwts: total=6656,6913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.323 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.323 job1: (groupid=0, jobs=1): err= 0: pid=75356: Thu Nov 28 21:18:07 2024 00:10:44.323 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:10:44.323 slat (usec): min=4, max=11308, avg=257.79, stdev=1380.05 00:10:44.323 clat (usec): min=12039, max=47009, avg=33281.10, stdev=7200.17 00:10:44.323 lat (usec): min=12054, max=47035, avg=33538.89, stdev=7114.29 00:10:44.323 clat percentiles (usec): 00:10:44.323 | 1.00th=[12518], 5.00th=[23462], 10.00th=[25560], 20.00th=[27657], 00:10:44.323 | 30.00th=[28443], 40.00th=[28967], 50.00th=[31851], 60.00th=[35914], 00:10:44.323 | 70.00th=[39060], 80.00th=[40109], 90.00th=[41681], 95.00th=[45351], 00:10:44.323 | 99.00th=[46924], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:10:44.323 | 99.99th=[46924] 00:10:44.323 write: IOPS=2072, BW=8291KiB/s (8490kB/s)(8324KiB/1004msec); 0 zone resets 00:10:44.323 slat (usec): min=11, max=12909, avg=219.24, stdev=1136.95 00:10:44.323 clat (usec): min=1858, max=44429, avg=27675.68, stdev=7035.51 00:10:44.323 lat (usec): min=4277, max=45913, avg=27894.92, stdev=6992.18 00:10:44.323 clat percentiles (usec): 00:10:44.323 | 1.00th=[ 5014], 5.00th=[21627], 10.00th=[21627], 20.00th=[21890], 00:10:44.323 | 30.00th=[22676], 40.00th=[23725], 50.00th=[25560], 60.00th=[29754], 00:10:44.323 | 70.00th=[31327], 80.00th=[33817], 90.00th=[36963], 95.00th=[41681], 00:10:44.323 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:10:44.323 | 99.99th=[44303] 00:10:44.323 bw ( KiB/s): min= 8192, max= 8192, per=15.64%, avg=8192.00, stdev= 0.00, samples=2 00:10:44.323 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:44.323 lat (msec) : 2=0.02%, 10=0.78%, 20=1.62%, 50=97.58% 00:10:44.323 cpu : usr=2.19%, sys=5.48%, ctx=130, majf=0, minf=17 00:10:44.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:10:44.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.323 issued rwts: total=2048,2081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.323 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.323 job2: (groupid=0, jobs=1): err= 0: pid=75357: Thu Nov 28 21:18:07 2024 00:10:44.323 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:10:44.323 slat (usec): min=6, max=8519, avg=177.32, stdev=866.90 00:10:44.323 clat (usec): min=14136, max=44839, avg=22013.12, stdev=4325.14 00:10:44.323 lat (usec): min=14154, max=44855, avg=22190.44, stdev=4395.35 00:10:44.323 clat percentiles (usec): 00:10:44.323 | 1.00th=[16057], 5.00th=[18482], 10.00th=[18744], 20.00th=[19792], 00:10:44.323 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20579], 60.00th=[20841], 00:10:44.323 | 70.00th=[21890], 80.00th=[23200], 90.00th=[27657], 95.00th=[31851], 00:10:44.323 | 99.00th=[39584], 99.50th=[39584], 99.90th=[44827], 99.95th=[44827], 00:10:44.323 | 99.99th=[44827] 00:10:44.323 write: IOPS=2118, BW=8476KiB/s (8679kB/s)(8552KiB/1009msec); 0 zone resets 00:10:44.323 slat (usec): min=11, max=11462, avg=290.17, stdev=1216.65 00:10:44.323 clat (usec): min=5942, max=90473, avg=38261.96, stdev=16849.79 00:10:44.323 lat (usec): min=10386, max=90507, 
avg=38552.13, stdev=16959.78 00:10:44.323 clat percentiles (usec): 00:10:44.323 | 1.00th=[12387], 5.00th=[19268], 10.00th=[24773], 20.00th=[26608], 00:10:44.323 | 30.00th=[28967], 40.00th=[30540], 50.00th=[33424], 60.00th=[34866], 00:10:44.323 | 70.00th=[38011], 80.00th=[45351], 90.00th=[68682], 95.00th=[80217], 00:10:44.323 | 99.00th=[89654], 99.50th=[90702], 99.90th=[90702], 99.95th=[90702], 00:10:44.323 | 99.99th=[90702] 00:10:44.323 bw ( KiB/s): min= 8192, max= 8192, per=15.64%, avg=8192.00, stdev= 0.00, samples=2 00:10:44.323 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:44.323 lat (msec) : 10=0.02%, 20=15.81%, 50=74.49%, 100=9.68% 00:10:44.323 cpu : usr=1.98%, sys=7.04%, ctx=255, majf=0, minf=11 00:10:44.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:10:44.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.323 issued rwts: total=2048,2138,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.323 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.323 job3: (groupid=0, jobs=1): err= 0: pid=75358: Thu Nov 28 21:18:07 2024 00:10:44.323 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:10:44.323 slat (usec): min=4, max=11271, avg=257.74, stdev=1377.92 00:10:44.323 clat (usec): min=12549, max=46597, avg=33203.25, stdev=7122.86 00:10:44.323 lat (usec): min=12576, max=46624, avg=33460.99, stdev=7036.17 00:10:44.323 clat percentiles (usec): 00:10:44.323 | 1.00th=[12911], 5.00th=[23200], 10.00th=[25297], 20.00th=[27395], 00:10:44.323 | 30.00th=[28181], 40.00th=[28705], 50.00th=[31851], 60.00th=[35914], 00:10:44.323 | 70.00th=[38536], 80.00th=[40109], 90.00th=[41681], 95.00th=[44827], 00:10:44.323 | 99.00th=[46400], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:10:44.323 | 99.99th=[46400] 00:10:44.323 write: IOPS=2070, BW=8283KiB/s (8481kB/s)(8324KiB/1005msec); 0 zone resets 00:10:44.323 slat (usec): min=13, max=12494, avg=219.52, stdev=1128.63 00:10:44.323 clat (usec): min=1644, max=44426, avg=27806.52, stdev=6996.15 00:10:44.323 lat (usec): min=5189, max=45473, avg=28026.04, stdev=6951.71 00:10:44.323 clat percentiles (usec): 00:10:44.323 | 1.00th=[ 5932], 5.00th=[21365], 10.00th=[21890], 20.00th=[22152], 00:10:44.323 | 30.00th=[22676], 40.00th=[23725], 50.00th=[25560], 60.00th=[30016], 00:10:44.323 | 70.00th=[31589], 80.00th=[33817], 90.00th=[36963], 95.00th=[41681], 00:10:44.323 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:10:44.323 | 99.99th=[44303] 00:10:44.323 bw ( KiB/s): min= 8192, max= 8192, per=15.64%, avg=8192.00, stdev= 0.00, samples=2 00:10:44.323 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:44.323 lat (msec) : 2=0.02%, 10=0.78%, 20=1.57%, 50=97.63% 00:10:44.323 cpu : usr=1.79%, sys=7.27%, ctx=130, majf=0, minf=13 00:10:44.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:10:44.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.323 issued rwts: total=2048,2081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.323 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.323 00:10:44.323 Run status group 0 (all jobs): 00:10:44.323 READ: bw=49.6MiB/s (52.0MB/s), 8119KiB/s-26.0MiB/s (8314kB/s-27.2MB/s), io=50.0MiB (52.4MB), run=1001-1009msec 00:10:44.323 WRITE: bw=51.2MiB/s 
(53.6MB/s), 8283KiB/s-27.0MiB/s (8481kB/s-28.3MB/s), io=51.6MiB (54.1MB), run=1001-1009msec 00:10:44.323 00:10:44.323 Disk stats (read/write): 00:10:44.323 nvme0n1: ios=5682/6025, merge=0/0, ticks=11197/11451, in_queue=22648, util=87.76% 00:10:44.323 nvme0n2: ios=1585/2016, merge=0/0, ticks=11558/11129, in_queue=22687, util=87.85% 00:10:44.323 nvme0n3: ios=1536/1903, merge=0/0, ticks=11730/22662, in_queue=34392, util=89.03% 00:10:44.323 nvme0n4: ios=1536/2048, merge=0/0, ticks=12571/13212, in_queue=25783, util=89.68% 00:10:44.323 21:18:07 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:44.323 [global] 00:10:44.323 thread=1 00:10:44.323 invalidate=1 00:10:44.323 rw=randwrite 00:10:44.323 time_based=1 00:10:44.323 runtime=1 00:10:44.323 ioengine=libaio 00:10:44.323 direct=1 00:10:44.323 bs=4096 00:10:44.323 iodepth=128 00:10:44.323 norandommap=0 00:10:44.323 numjobs=1 00:10:44.323 00:10:44.323 verify_dump=1 00:10:44.323 verify_backlog=512 00:10:44.323 verify_state_save=0 00:10:44.323 do_verify=1 00:10:44.323 verify=crc32c-intel 00:10:44.323 [job0] 00:10:44.323 filename=/dev/nvme0n1 00:10:44.323 [job1] 00:10:44.323 filename=/dev/nvme0n2 00:10:44.323 [job2] 00:10:44.323 filename=/dev/nvme0n3 00:10:44.323 [job3] 00:10:44.323 filename=/dev/nvme0n4 00:10:44.323 Could not set queue depth (nvme0n1) 00:10:44.323 Could not set queue depth (nvme0n2) 00:10:44.324 Could not set queue depth (nvme0n3) 00:10:44.324 Could not set queue depth (nvme0n4) 00:10:44.324 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.324 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.324 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.324 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.324 fio-3.35 00:10:44.324 Starting 4 threads 00:10:45.703 00:10:45.703 job0: (groupid=0, jobs=1): err= 0: pid=75417: Thu Nov 28 21:18:09 2024 00:10:45.703 read: IOPS=4056, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:10:45.703 slat (usec): min=8, max=28247, avg=129.53, stdev=967.07 00:10:45.703 clat (usec): min=1920, max=46680, avg=18085.24, stdev=5801.31 00:10:45.703 lat (usec): min=8825, max=46719, avg=18214.76, stdev=5855.77 00:10:45.703 clat percentiles (usec): 00:10:45.703 | 1.00th=[ 9634], 5.00th=[12387], 10.00th=[12649], 20.00th=[13173], 00:10:45.703 | 30.00th=[13566], 40.00th=[14615], 50.00th=[18220], 60.00th=[18744], 00:10:45.703 | 70.00th=[19792], 80.00th=[20841], 90.00th=[26084], 95.00th=[27395], 00:10:45.703 | 99.00th=[38536], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:10:45.703 | 99.99th=[46924] 00:10:45.703 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:10:45.703 slat (usec): min=6, max=15609, avg=107.72, stdev=687.53 00:10:45.703 clat (usec): min=4920, max=38185, avg=13114.02, stdev=4114.29 00:10:45.704 lat (usec): min=7378, max=38230, avg=13221.74, stdev=4099.22 00:10:45.704 clat percentiles (usec): 00:10:45.704 | 1.00th=[ 7504], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10028], 00:10:45.704 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11469], 60.00th=[12256], 00:10:45.704 | 70.00th=[13698], 80.00th=[17695], 90.00th=[19792], 95.00th=[20055], 00:10:45.704 | 99.00th=[26346], 99.50th=[26608], 99.90th=[28967], 99.95th=[28967], 00:10:45.704 | 
99.99th=[38011] 00:10:45.704 bw ( KiB/s): min=16384, max=16416, per=31.05%, avg=16400.00, stdev=22.63, samples=2 00:10:45.704 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:10:45.704 lat (msec) : 2=0.01%, 10=10.26%, 20=73.66%, 50=16.07% 00:10:45.704 cpu : usr=4.87%, sys=10.63%, ctx=175, majf=0, minf=9 00:10:45.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:45.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.704 issued rwts: total=4089,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.704 job1: (groupid=0, jobs=1): err= 0: pid=75418: Thu Nov 28 21:18:09 2024 00:10:45.704 read: IOPS=1181, BW=4725KiB/s (4839kB/s)(4744KiB/1004msec) 00:10:45.704 slat (usec): min=4, max=18920, avg=328.43, stdev=1441.32 00:10:45.704 clat (usec): min=3067, max=74198, avg=42708.57, stdev=13274.52 00:10:45.704 lat (usec): min=3082, max=75005, avg=43037.00, stdev=13331.16 00:10:45.704 clat percentiles (usec): 00:10:45.704 | 1.00th=[ 3294], 5.00th=[22414], 10.00th=[31327], 20.00th=[34866], 00:10:45.704 | 30.00th=[36439], 40.00th=[38011], 50.00th=[41157], 60.00th=[44303], 00:10:45.704 | 70.00th=[48497], 80.00th=[54789], 90.00th=[62129], 95.00th=[63701], 00:10:45.704 | 99.00th=[69731], 99.50th=[70779], 99.90th=[73925], 99.95th=[73925], 00:10:45.704 | 99.99th=[73925] 00:10:45.704 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets 00:10:45.704 slat (usec): min=5, max=21388, avg=389.53, stdev=1664.43 00:10:45.704 clat (msec): min=14, max=111, avg=47.89, stdev=25.93 00:10:45.704 lat (msec): min=17, max=111, avg=48.28, stdev=26.12 00:10:45.704 clat percentiles (msec): 00:10:45.704 | 1.00th=[ 18], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 28], 00:10:45.704 | 30.00th=[ 30], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 40], 00:10:45.704 | 70.00th=[ 66], 80.00th=[ 77], 90.00th=[ 94], 95.00th=[ 95], 00:10:45.704 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 109], 99.95th=[ 112], 00:10:45.704 | 99.99th=[ 112] 00:10:45.704 bw ( KiB/s): min= 5668, max= 6608, per=11.62%, avg=6138.00, stdev=664.68, samples=2 00:10:45.704 iops : min= 1417, max= 1652, avg=1534.50, stdev=166.17, samples=2 00:10:45.704 lat (msec) : 4=1.03%, 10=0.26%, 20=5.14%, 50=63.01%, 100=30.46% 00:10:45.704 lat (msec) : 250=0.11% 00:10:45.704 cpu : usr=1.89%, sys=4.09%, ctx=352, majf=0, minf=21 00:10:45.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:10:45.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.704 issued rwts: total=1186,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.704 job2: (groupid=0, jobs=1): err= 0: pid=75419: Thu Nov 28 21:18:09 2024 00:10:45.704 read: IOPS=1265, BW=5063KiB/s (5185kB/s)(5104KiB/1008msec) 00:10:45.704 slat (usec): min=4, max=14146, avg=372.79, stdev=1496.98 00:10:45.704 clat (usec): min=4156, max=88287, avg=44239.63, stdev=15269.77 00:10:45.704 lat (usec): min=9584, max=91270, avg=44612.42, stdev=15371.93 00:10:45.704 clat percentiles (usec): 00:10:45.704 | 1.00th=[ 9765], 5.00th=[22938], 10.00th=[28443], 20.00th=[33817], 00:10:45.704 | 30.00th=[36439], 40.00th=[37487], 50.00th=[40109], 60.00th=[44303], 00:10:45.704 | 70.00th=[50070], 80.00th=[58983], 
90.00th=[63701], 95.00th=[76022], 00:10:45.704 | 99.00th=[86508], 99.50th=[87557], 99.90th=[87557], 99.95th=[88605], 00:10:45.704 | 99.99th=[88605] 00:10:45.704 write: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec); 0 zone resets 00:10:45.704 slat (usec): min=6, max=15205, avg=334.29, stdev=1448.39 00:10:45.704 clat (msec): min=13, max=103, avg=46.37, stdev=25.01 00:10:45.704 lat (msec): min=13, max=103, avg=46.70, stdev=25.20 00:10:45.704 clat percentiles (msec): 00:10:45.704 | 1.00th=[ 17], 5.00th=[ 21], 10.00th=[ 24], 20.00th=[ 27], 00:10:45.704 | 30.00th=[ 29], 40.00th=[ 36], 50.00th=[ 38], 60.00th=[ 40], 00:10:45.704 | 70.00th=[ 47], 80.00th=[ 77], 90.00th=[ 93], 95.00th=[ 95], 00:10:45.704 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 104], 99.95th=[ 105], 00:10:45.704 | 99.99th=[ 105] 00:10:45.704 bw ( KiB/s): min= 5616, max= 6672, per=11.63%, avg=6144.00, stdev=746.70, samples=2 00:10:45.704 iops : min= 1404, max= 1668, avg=1536.00, stdev=186.68, samples=2 00:10:45.704 lat (msec) : 10=0.78%, 20=3.45%, 50=68.53%, 100=27.10%, 250=0.14% 00:10:45.704 cpu : usr=1.39%, sys=4.77%, ctx=327, majf=0, minf=9 00:10:45.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:10:45.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.704 issued rwts: total=1276,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.704 job3: (groupid=0, jobs=1): err= 0: pid=75420: Thu Nov 28 21:18:09 2024 00:10:45.704 read: IOPS=5816, BW=22.7MiB/s (23.8MB/s)(22.8MiB/1003msec) 00:10:45.704 slat (usec): min=4, max=9031, avg=80.36, stdev=492.98 00:10:45.704 clat (usec): min=955, max=20761, avg=11056.79, stdev=2083.30 00:10:45.704 lat (usec): min=4106, max=20783, avg=11137.15, stdev=2087.33 00:10:45.704 clat percentiles (usec): 00:10:45.704 | 1.00th=[ 5997], 5.00th=[ 7635], 10.00th=[ 9634], 20.00th=[10159], 00:10:45.704 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:10:45.704 | 70.00th=[11338], 80.00th=[11863], 90.00th=[13698], 95.00th=[15533], 00:10:45.704 | 99.00th=[17695], 99.50th=[19006], 99.90th=[20317], 99.95th=[20579], 00:10:45.704 | 99.99th=[20841] 00:10:45.704 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:10:45.704 slat (usec): min=5, max=6786, avg=79.43, stdev=453.94 00:10:45.704 clat (usec): min=2694, max=20594, avg=10190.76, stdev=1858.88 00:10:45.704 lat (usec): min=2708, max=20602, avg=10270.20, stdev=1832.57 00:10:45.704 clat percentiles (usec): 00:10:45.704 | 1.00th=[ 4490], 5.00th=[ 6849], 10.00th=[ 7635], 20.00th=[ 9241], 00:10:45.704 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:10:45.704 | 70.00th=[10683], 80.00th=[11207], 90.00th=[12649], 95.00th=[13566], 00:10:45.704 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14615], 99.95th=[14615], 00:10:45.704 | 99.99th=[20579] 00:10:45.704 bw ( KiB/s): min=24576, max=24625, per=46.57%, avg=24600.50, stdev=34.65, samples=2 00:10:45.704 iops : min= 6144, max= 6156, avg=6150.00, stdev= 8.49, samples=2 00:10:45.704 lat (usec) : 1000=0.01% 00:10:45.704 lat (msec) : 4=0.40%, 10=28.13%, 20=71.39%, 50=0.07% 00:10:45.704 cpu : usr=5.68%, sys=14.96%, ctx=332, majf=0, minf=8 00:10:45.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:45.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.704 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.704 issued rwts: total=5834,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.704 00:10:45.704 Run status group 0 (all jobs): 00:10:45.704 READ: bw=48.0MiB/s (50.3MB/s), 4725KiB/s-22.7MiB/s (4839kB/s-23.8MB/s), io=48.4MiB (50.7MB), run=1003-1008msec 00:10:45.704 WRITE: bw=51.6MiB/s (54.1MB/s), 6095KiB/s-23.9MiB/s (6242kB/s-25.1MB/s), io=52.0MiB (54.5MB), run=1003-1008msec 00:10:45.704 00:10:45.704 Disk stats (read/write): 00:10:45.704 nvme0n1: ios=3626/3656, merge=0/0, ticks=60127/41748, in_queue=101875, util=88.57% 00:10:45.704 nvme0n2: ios=1073/1072, merge=0/0, ticks=22267/29172, in_queue=51439, util=86.44% 00:10:45.704 nvme0n3: ios=1024/1280, merge=0/0, ticks=24627/29192, in_queue=53819, util=88.66% 00:10:45.704 nvme0n4: ios=5077/5120, merge=0/0, ticks=53134/48151, in_queue=101285, util=89.83% 00:10:45.704 21:18:09 -- target/fio.sh@55 -- # sync 00:10:45.704 21:18:09 -- target/fio.sh@59 -- # fio_pid=75433 00:10:45.704 21:18:09 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:45.704 21:18:09 -- target/fio.sh@61 -- # sleep 3 00:10:45.704 [global] 00:10:45.704 thread=1 00:10:45.704 invalidate=1 00:10:45.704 rw=read 00:10:45.704 time_based=1 00:10:45.704 runtime=10 00:10:45.704 ioengine=libaio 00:10:45.704 direct=1 00:10:45.704 bs=4096 00:10:45.704 iodepth=1 00:10:45.704 norandommap=1 00:10:45.704 numjobs=1 00:10:45.704 00:10:45.704 [job0] 00:10:45.704 filename=/dev/nvme0n1 00:10:45.704 [job1] 00:10:45.704 filename=/dev/nvme0n2 00:10:45.704 [job2] 00:10:45.704 filename=/dev/nvme0n3 00:10:45.704 [job3] 00:10:45.704 filename=/dev/nvme0n4 00:10:45.704 Could not set queue depth (nvme0n1) 00:10:45.704 Could not set queue depth (nvme0n2) 00:10:45.704 Could not set queue depth (nvme0n3) 00:10:45.704 Could not set queue depth (nvme0n4) 00:10:45.704 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.704 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.704 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.704 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.704 fio-3.35 00:10:45.704 Starting 4 threads 00:10:48.990 21:18:12 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:48.990 fio: pid=75476, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:48.990 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=65323008, buflen=4096 00:10:48.990 21:18:12 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:48.990 fio: pid=75475, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:48.990 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=70221824, buflen=4096 00:10:48.990 21:18:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:48.990 21:18:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:49.249 fio: pid=75473, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:49.249 fio: io_u error on file /dev/nvme0n1: Operation not supported: read 
offset=10887168, buflen=4096 00:10:49.507 21:18:13 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:49.508 21:18:13 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:49.508 fio: pid=75474, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:49.508 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=16220160, buflen=4096 00:10:49.765 00:10:49.765 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75473: Thu Nov 28 21:18:13 2024 00:10:49.765 read: IOPS=5411, BW=21.1MiB/s (22.2MB/s)(74.4MiB/3519msec) 00:10:49.765 slat (usec): min=8, max=11273, avg=16.86, stdev=151.38 00:10:49.765 clat (usec): min=4, max=2818, avg=166.59, stdev=38.89 00:10:49.765 lat (usec): min=126, max=11456, avg=183.45, stdev=157.73 00:10:49.765 clat percentiles (usec): 00:10:49.765 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 151], 00:10:49.765 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:10:49.765 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 194], 95.00th=[ 219], 00:10:49.765 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 351], 99.95th=[ 586], 00:10:49.765 | 99.99th=[ 2409] 00:10:49.765 bw ( KiB/s): min=21392, max=22920, per=29.49%, avg=22532.00, stdev=574.73, samples=6 00:10:49.765 iops : min= 5348, max= 5730, avg=5633.00, stdev=143.68, samples=6 00:10:49.765 lat (usec) : 10=0.01%, 100=0.01%, 250=98.90%, 500=1.02%, 750=0.03% 00:10:49.765 lat (usec) : 1000=0.01% 00:10:49.765 lat (msec) : 2=0.01%, 4=0.01% 00:10:49.765 cpu : usr=1.65%, sys=7.13%, ctx=19063, majf=0, minf=1 00:10:49.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.765 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.765 issued rwts: total=19043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.766 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75474: Thu Nov 28 21:18:13 2024 00:10:49.766 read: IOPS=5361, BW=20.9MiB/s (22.0MB/s)(79.5MiB/3795msec) 00:10:49.766 slat (usec): min=7, max=11844, avg=16.14, stdev=157.46 00:10:49.766 clat (usec): min=3, max=14556, avg=169.06, stdev=149.34 00:10:49.766 lat (usec): min=125, max=14605, avg=185.21, stdev=218.08 00:10:49.766 clat percentiles (usec): 00:10:49.766 | 1.00th=[ 130], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:10:49.766 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:10:49.766 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 198], 95.00th=[ 223], 00:10:49.766 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 461], 99.95th=[ 799], 00:10:49.766 | 99.99th=[ 4113] 00:10:49.766 bw ( KiB/s): min=15524, max=22968, per=28.18%, avg=21529.71, stdev=2661.73, samples=7 00:10:49.766 iops : min= 3881, max= 5742, avg=5382.43, stdev=665.43, samples=7 00:10:49.766 lat (usec) : 4=0.01%, 100=0.01%, 250=98.66%, 500=1.23%, 750=0.03% 00:10:49.766 lat (usec) : 1000=0.01% 00:10:49.766 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%, 20=0.01% 00:10:49.766 cpu : usr=1.29%, sys=6.77%, ctx=20362, majf=0, minf=2 00:10:49.766 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.766 complete : 0=0.1%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.766 issued rwts: total=20345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.766 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.766 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75475: Thu Nov 28 21:18:13 2024 00:10:49.766 read: IOPS=5267, BW=20.6MiB/s (21.6MB/s)(67.0MiB/3255msec) 00:10:49.766 slat (usec): min=11, max=13415, avg=15.20, stdev=122.60 00:10:49.766 clat (usec): min=89, max=2667, avg=173.41, stdev=38.21 00:10:49.766 lat (usec): min=140, max=13626, avg=188.61, stdev=128.68 00:10:49.766 clat percentiles (usec): 00:10:49.766 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:10:49.766 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:10:49.766 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 202], 00:10:49.766 | 99.00th=[ 255], 99.50th=[ 277], 99.90th=[ 594], 99.95th=[ 816], 00:10:49.766 | 99.99th=[ 1876] 00:10:49.766 bw ( KiB/s): min=20608, max=21816, per=27.80%, avg=21236.00, stdev=438.08, samples=6 00:10:49.766 iops : min= 5152, max= 5454, avg=5309.00, stdev=109.52, samples=6 00:10:49.766 lat (usec) : 100=0.01%, 250=98.86%, 500=0.94%, 750=0.12%, 1000=0.03% 00:10:49.766 lat (msec) : 2=0.02%, 4=0.01% 00:10:49.766 cpu : usr=1.26%, sys=6.76%, ctx=17150, majf=0, minf=2 00:10:49.766 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.766 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.766 issued rwts: total=17145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.766 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.766 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75476: Thu Nov 28 21:18:13 2024 00:10:49.766 read: IOPS=5348, BW=20.9MiB/s (21.9MB/s)(62.3MiB/2982msec) 00:10:49.766 slat (nsec): min=11837, max=74196, avg=14693.63, stdev=3235.97 00:10:49.766 clat (usec): min=132, max=3063, avg=170.94, stdev=28.42 00:10:49.766 lat (usec): min=145, max=3084, avg=185.63, stdev=28.69 00:10:49.766 clat percentiles (usec): 00:10:49.766 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:10:49.766 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:10:49.766 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 196], 00:10:49.766 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 241], 99.95th=[ 437], 00:10:49.766 | 99.99th=[ 807] 00:10:49.766 bw ( KiB/s): min=20888, max=21760, per=28.03%, avg=21414.40, stdev=322.86, samples=5 00:10:49.766 iops : min= 5222, max= 5440, avg=5353.60, stdev=80.71, samples=5 00:10:49.766 lat (usec) : 250=99.91%, 500=0.04%, 750=0.03%, 1000=0.01% 00:10:49.766 lat (msec) : 4=0.01% 00:10:49.766 cpu : usr=1.58%, sys=6.51%, ctx=15949, majf=0, minf=2 00:10:49.766 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.766 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.766 issued rwts: total=15949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.766 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.766 00:10:49.766 Run status group 0 (all jobs): 00:10:49.766 READ: bw=74.6MiB/s (78.2MB/s), 20.6MiB/s-21.1MiB/s (21.6MB/s-22.2MB/s), io=283MiB (297MB), run=2982-3795msec 00:10:49.766 
00:10:49.766 Disk stats (read/write): 00:10:49.766 nvme0n1: ios=18369/0, merge=0/0, ticks=3075/0, in_queue=3075, util=95.19% 00:10:49.766 nvme0n2: ios=19281/0, merge=0/0, ticks=3297/0, in_queue=3297, util=95.51% 00:10:49.766 nvme0n3: ios=16393/0, merge=0/0, ticks=2899/0, in_queue=2899, util=96.15% 00:10:49.766 nvme0n4: ios=15345/0, merge=0/0, ticks=2694/0, in_queue=2694, util=96.73% 00:10:49.766 21:18:13 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:49.766 21:18:13 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:50.024 21:18:13 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.024 21:18:13 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:50.283 21:18:13 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.283 21:18:13 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:50.283 21:18:14 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.283 21:18:14 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:50.542 21:18:14 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.542 21:18:14 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:50.800 21:18:14 -- target/fio.sh@69 -- # fio_status=0 00:10:50.800 21:18:14 -- target/fio.sh@70 -- # wait 75433 00:10:50.800 21:18:14 -- target/fio.sh@70 -- # fio_status=4 00:10:50.800 21:18:14 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:50.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.800 21:18:14 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:50.800 21:18:14 -- common/autotest_common.sh@1208 -- # local i=0 00:10:50.800 21:18:14 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.800 21:18:14 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:51.059 21:18:14 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:51.059 21:18:14 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.059 nvmf hotplug test: fio failed as expected 00:10:51.059 21:18:14 -- common/autotest_common.sh@1220 -- # return 0 00:10:51.059 21:18:14 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:51.059 21:18:14 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:51.059 21:18:14 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.059 21:18:14 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:51.059 21:18:14 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:51.059 21:18:14 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:51.059 21:18:14 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:51.059 21:18:14 -- target/fio.sh@91 -- # nvmftestfini 00:10:51.059 21:18:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:51.059 21:18:14 -- nvmf/common.sh@116 -- # sync 00:10:51.318 21:18:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:51.318 21:18:14 -- nvmf/common.sh@119 -- # set +e 00:10:51.318 21:18:14 -- nvmf/common.sh@120 -- # for i in {1..20} 
00:10:51.318 21:18:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:51.318 rmmod nvme_tcp 00:10:51.318 rmmod nvme_fabrics 00:10:51.318 rmmod nvme_keyring 00:10:51.318 21:18:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:51.318 21:18:14 -- nvmf/common.sh@123 -- # set -e 00:10:51.318 21:18:14 -- nvmf/common.sh@124 -- # return 0 00:10:51.318 21:18:14 -- nvmf/common.sh@477 -- # '[' -n 75059 ']' 00:10:51.318 21:18:14 -- nvmf/common.sh@478 -- # killprocess 75059 00:10:51.318 21:18:14 -- common/autotest_common.sh@936 -- # '[' -z 75059 ']' 00:10:51.318 21:18:14 -- common/autotest_common.sh@940 -- # kill -0 75059 00:10:51.318 21:18:14 -- common/autotest_common.sh@941 -- # uname 00:10:51.318 21:18:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:51.318 21:18:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75059 00:10:51.318 killing process with pid 75059 00:10:51.318 21:18:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:51.318 21:18:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:51.318 21:18:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75059' 00:10:51.318 21:18:14 -- common/autotest_common.sh@955 -- # kill 75059 00:10:51.318 21:18:14 -- common/autotest_common.sh@960 -- # wait 75059 00:10:51.318 21:18:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:51.318 21:18:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:51.318 21:18:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:51.318 21:18:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:51.318 21:18:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:51.318 21:18:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.318 21:18:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.318 21:18:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.318 21:18:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:51.576 00:10:51.576 real 0m18.623s 00:10:51.576 user 1m9.160s 00:10:51.576 sys 0m10.905s 00:10:51.576 ************************************ 00:10:51.576 END TEST nvmf_fio_target 00:10:51.576 ************************************ 00:10:51.576 21:18:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:51.576 21:18:15 -- common/autotest_common.sh@10 -- # set +x 00:10:51.576 21:18:15 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:51.576 21:18:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:51.576 21:18:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:51.576 21:18:15 -- common/autotest_common.sh@10 -- # set +x 00:10:51.576 ************************************ 00:10:51.576 START TEST nvmf_bdevio 00:10:51.576 ************************************ 00:10:51.576 21:18:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:51.576 * Looking for test storage... 
00:10:51.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:51.576 21:18:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:51.576 21:18:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:51.576 21:18:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:51.576 21:18:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:51.576 21:18:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:51.576 21:18:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:51.576 21:18:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:51.576 21:18:15 -- scripts/common.sh@335 -- # IFS=.-: 00:10:51.576 21:18:15 -- scripts/common.sh@335 -- # read -ra ver1 00:10:51.576 21:18:15 -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.576 21:18:15 -- scripts/common.sh@336 -- # read -ra ver2 00:10:51.576 21:18:15 -- scripts/common.sh@337 -- # local 'op=<' 00:10:51.576 21:18:15 -- scripts/common.sh@339 -- # ver1_l=2 00:10:51.576 21:18:15 -- scripts/common.sh@340 -- # ver2_l=1 00:10:51.576 21:18:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:51.576 21:18:15 -- scripts/common.sh@343 -- # case "$op" in 00:10:51.576 21:18:15 -- scripts/common.sh@344 -- # : 1 00:10:51.576 21:18:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:51.576 21:18:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:51.576 21:18:15 -- scripts/common.sh@364 -- # decimal 1 00:10:51.576 21:18:15 -- scripts/common.sh@352 -- # local d=1 00:10:51.576 21:18:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.576 21:18:15 -- scripts/common.sh@354 -- # echo 1 00:10:51.576 21:18:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:51.576 21:18:15 -- scripts/common.sh@365 -- # decimal 2 00:10:51.576 21:18:15 -- scripts/common.sh@352 -- # local d=2 00:10:51.576 21:18:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.576 21:18:15 -- scripts/common.sh@354 -- # echo 2 00:10:51.576 21:18:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:51.576 21:18:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:51.576 21:18:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:51.576 21:18:15 -- scripts/common.sh@367 -- # return 0 00:10:51.576 21:18:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.576 21:18:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:51.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.576 --rc genhtml_branch_coverage=1 00:10:51.576 --rc genhtml_function_coverage=1 00:10:51.576 --rc genhtml_legend=1 00:10:51.576 --rc geninfo_all_blocks=1 00:10:51.576 --rc geninfo_unexecuted_blocks=1 00:10:51.576 00:10:51.576 ' 00:10:51.576 21:18:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:51.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.576 --rc genhtml_branch_coverage=1 00:10:51.576 --rc genhtml_function_coverage=1 00:10:51.576 --rc genhtml_legend=1 00:10:51.576 --rc geninfo_all_blocks=1 00:10:51.576 --rc geninfo_unexecuted_blocks=1 00:10:51.576 00:10:51.576 ' 00:10:51.576 21:18:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:51.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.576 --rc genhtml_branch_coverage=1 00:10:51.576 --rc genhtml_function_coverage=1 00:10:51.576 --rc genhtml_legend=1 00:10:51.576 --rc geninfo_all_blocks=1 00:10:51.576 --rc geninfo_unexecuted_blocks=1 00:10:51.576 00:10:51.576 ' 00:10:51.576 
21:18:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:51.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.576 --rc genhtml_branch_coverage=1 00:10:51.576 --rc genhtml_function_coverage=1 00:10:51.576 --rc genhtml_legend=1 00:10:51.576 --rc geninfo_all_blocks=1 00:10:51.576 --rc geninfo_unexecuted_blocks=1 00:10:51.576 00:10:51.576 ' 00:10:51.576 21:18:15 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:51.576 21:18:15 -- nvmf/common.sh@7 -- # uname -s 00:10:51.576 21:18:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.576 21:18:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.576 21:18:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.576 21:18:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.577 21:18:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.577 21:18:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.577 21:18:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.577 21:18:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.577 21:18:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.577 21:18:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.577 21:18:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:10:51.577 21:18:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:10:51.577 21:18:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.577 21:18:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.577 21:18:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:51.577 21:18:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:51.577 21:18:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.577 21:18:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.577 21:18:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.577 21:18:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.577 21:18:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.577 21:18:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.577 21:18:15 -- paths/export.sh@5 -- # export PATH 00:10:51.577 21:18:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.577 21:18:15 -- nvmf/common.sh@46 -- # : 0 00:10:51.577 21:18:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:51.577 21:18:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:51.577 21:18:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:51.577 21:18:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.577 21:18:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.577 21:18:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:51.577 21:18:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:51.577 21:18:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:51.577 21:18:15 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:51.577 21:18:15 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:51.577 21:18:15 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:51.577 21:18:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:51.577 21:18:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.577 21:18:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:51.577 21:18:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:51.577 21:18:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:51.577 21:18:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.577 21:18:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.577 21:18:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.835 21:18:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:51.835 21:18:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:51.835 21:18:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:51.835 21:18:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:51.835 21:18:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:51.835 21:18:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:51.835 21:18:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.835 21:18:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.835 21:18:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:51.835 21:18:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:51.835 21:18:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:51.835 21:18:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:51.835 21:18:15 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:51.835 21:18:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.835 21:18:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:51.835 21:18:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:51.836 21:18:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:51.836 21:18:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:51.836 21:18:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:51.836 21:18:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:51.836 Cannot find device "nvmf_tgt_br" 00:10:51.836 21:18:15 -- nvmf/common.sh@154 -- # true 00:10:51.836 21:18:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:51.836 Cannot find device "nvmf_tgt_br2" 00:10:51.836 21:18:15 -- nvmf/common.sh@155 -- # true 00:10:51.836 21:18:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:51.836 21:18:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:51.836 Cannot find device "nvmf_tgt_br" 00:10:51.836 21:18:15 -- nvmf/common.sh@157 -- # true 00:10:51.836 21:18:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:51.836 Cannot find device "nvmf_tgt_br2" 00:10:51.836 21:18:15 -- nvmf/common.sh@158 -- # true 00:10:51.836 21:18:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:51.836 21:18:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:51.836 21:18:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:51.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:51.836 21:18:15 -- nvmf/common.sh@161 -- # true 00:10:51.836 21:18:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:51.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:51.836 21:18:15 -- nvmf/common.sh@162 -- # true 00:10:51.836 21:18:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:51.836 21:18:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:51.836 21:18:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:51.836 21:18:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:51.836 21:18:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:51.836 21:18:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:51.836 21:18:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:51.836 21:18:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:51.836 21:18:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:51.836 21:18:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:51.836 21:18:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:51.836 21:18:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:51.836 21:18:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:51.836 21:18:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:51.836 21:18:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:51.836 21:18:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:10:52.095 21:18:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:52.095 21:18:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:52.095 21:18:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:52.095 21:18:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:52.095 21:18:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:52.095 21:18:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:52.095 21:18:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:52.095 21:18:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:52.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:52.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:10:52.095 00:10:52.095 --- 10.0.0.2 ping statistics --- 00:10:52.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.095 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:10:52.095 21:18:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:52.095 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:52.095 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:10:52.095 00:10:52.095 --- 10.0.0.3 ping statistics --- 00:10:52.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.095 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:52.095 21:18:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:52.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:52.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:52.095 00:10:52.095 --- 10.0.0.1 ping statistics --- 00:10:52.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.095 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:52.095 21:18:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.095 21:18:15 -- nvmf/common.sh@421 -- # return 0 00:10:52.095 21:18:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:52.095 21:18:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.095 21:18:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:52.095 21:18:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:52.095 21:18:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.095 21:18:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:52.095 21:18:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:52.095 21:18:15 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:52.095 21:18:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:52.095 21:18:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:52.096 21:18:15 -- common/autotest_common.sh@10 -- # set +x 00:10:52.096 21:18:15 -- nvmf/common.sh@469 -- # nvmfpid=75754 00:10:52.096 21:18:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:52.096 21:18:15 -- nvmf/common.sh@470 -- # waitforlisten 75754 00:10:52.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
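Editor's note: the veth/bridge plumbing traced above (the nvmf_veth_init helper in nvmf/common.sh) condenses to the sketch below. It only restates commands already visible in the trace, trimmed to the initiator pair, one target pair and the bridge, so the topology is easier to see at a glance.

  # Condensed restatement of the topology built above: a namespace for the target,
  # two veth pairs, a bridge joining them, and an iptables rule admitting port 4420.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator-side reachability check, as in the trace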
00:10:52.096 21:18:15 -- common/autotest_common.sh@829 -- # '[' -z 75754 ']' 00:10:52.096 21:18:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.096 21:18:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:52.096 21:18:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.096 21:18:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:52.096 21:18:15 -- common/autotest_common.sh@10 -- # set +x 00:10:52.096 [2024-11-28 21:18:15.726869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:52.096 [2024-11-28 21:18:15.727164] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.354 [2024-11-28 21:18:15.864725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.354 [2024-11-28 21:18:15.898919] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:52.354 [2024-11-28 21:18:15.899375] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.354 [2024-11-28 21:18:15.899397] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.354 [2024-11-28 21:18:15.899407] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.354 [2024-11-28 21:18:15.899556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:52.354 [2024-11-28 21:18:15.899639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:52.354 [2024-11-28 21:18:15.899736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:52.354 [2024-11-28 21:18:15.899743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.354 21:18:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.354 21:18:15 -- common/autotest_common.sh@862 -- # return 0 00:10:52.354 21:18:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:52.354 21:18:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:52.354 21:18:15 -- common/autotest_common.sh@10 -- # set +x 00:10:52.354 21:18:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.354 21:18:16 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.354 21:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.354 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:10:52.354 [2024-11-28 21:18:16.027828] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.354 21:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.354 21:18:16 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:52.354 21:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.354 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:10:52.354 Malloc0 00:10:52.354 21:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.354 21:18:16 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:52.354 21:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.354 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:10:52.354 21:18:16 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.354 21:18:16 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:52.355 21:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.355 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:10:52.355 21:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.355 21:18:16 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.355 21:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.355 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:10:52.355 [2024-11-28 21:18:16.087723] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.355 21:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.355 21:18:16 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:52.355 21:18:16 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:52.355 21:18:16 -- nvmf/common.sh@520 -- # config=() 00:10:52.355 21:18:16 -- nvmf/common.sh@520 -- # local subsystem config 00:10:52.355 21:18:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:52.355 21:18:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:52.355 { 00:10:52.355 "params": { 00:10:52.355 "name": "Nvme$subsystem", 00:10:52.355 "trtype": "$TEST_TRANSPORT", 00:10:52.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:52.355 "adrfam": "ipv4", 00:10:52.355 "trsvcid": "$NVMF_PORT", 00:10:52.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:52.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:52.355 "hdgst": ${hdgst:-false}, 00:10:52.355 "ddgst": ${ddgst:-false} 00:10:52.355 }, 00:10:52.355 "method": "bdev_nvme_attach_controller" 00:10:52.355 } 00:10:52.355 EOF 00:10:52.355 )") 00:10:52.355 21:18:16 -- nvmf/common.sh@542 -- # cat 00:10:52.613 21:18:16 -- nvmf/common.sh@544 -- # jq . 00:10:52.613 21:18:16 -- nvmf/common.sh@545 -- # IFS=, 00:10:52.613 21:18:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:52.614 "params": { 00:10:52.614 "name": "Nvme1", 00:10:52.614 "trtype": "tcp", 00:10:52.614 "traddr": "10.0.0.2", 00:10:52.614 "adrfam": "ipv4", 00:10:52.614 "trsvcid": "4420", 00:10:52.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:52.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:52.614 "hdgst": false, 00:10:52.614 "ddgst": false 00:10:52.614 }, 00:10:52.614 "method": "bdev_nvme_attach_controller" 00:10:52.614 }' 00:10:52.614 [2024-11-28 21:18:16.141892] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:52.614 [2024-11-28 21:18:16.142150] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75777 ] 00:10:52.614 [2024-11-28 21:18:16.282087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:52.614 [2024-11-28 21:18:16.317900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.614 [2024-11-28 21:18:16.318015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.614 [2024-11-28 21:18:16.318017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.873 [2024-11-28 21:18:16.445483] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
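Editor's note: the JSON printed by gen_nvmf_target_json above is a single bdev_nvme_attach_controller entry fed to bdevio via /dev/fd/62. Against an already running SPDK app the same attachment could be expressed as an explicit RPC; the flag spelling below is the usual rpc.py interface and is an assumption, since this trace only shows the JSON form.

  # Equivalent of the generated Nvme1 config entry, as an explicit RPC call (sketch)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1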
00:10:52.873 [2024-11-28 21:18:16.445980] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:52.873 I/O targets: 00:10:52.873 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:52.873 00:10:52.873 00:10:52.873 CUnit - A unit testing framework for C - Version 2.1-3 00:10:52.873 http://cunit.sourceforge.net/ 00:10:52.873 00:10:52.873 00:10:52.873 Suite: bdevio tests on: Nvme1n1 00:10:52.873 Test: blockdev write read block ...passed 00:10:52.873 Test: blockdev write zeroes read block ...passed 00:10:52.873 Test: blockdev write zeroes read no split ...passed 00:10:52.873 Test: blockdev write zeroes read split ...passed 00:10:52.873 Test: blockdev write zeroes read split partial ...passed 00:10:52.873 Test: blockdev reset ...[2024-11-28 21:18:16.474234] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:52.873 [2024-11-28 21:18:16.474562] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c52a0 (9): Bad file descriptor 00:10:52.873 [2024-11-28 21:18:16.493955] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:52.873 passed 00:10:52.873 Test: blockdev write read 8 blocks ...passed 00:10:52.873 Test: blockdev write read size > 128k ...passed 00:10:52.873 Test: blockdev write read invalid size ...passed 00:10:52.873 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:52.873 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:52.873 Test: blockdev write read max offset ...passed 00:10:52.873 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:52.873 Test: blockdev writev readv 8 blocks ...passed 00:10:52.873 Test: blockdev writev readv 30 x 1block ...passed 00:10:52.873 Test: blockdev writev readv block ...passed 00:10:52.873 Test: blockdev writev readv size > 128k ...passed 00:10:52.873 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:52.873 Test: blockdev comparev and writev ...[2024-11-28 21:18:16.503550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.873 [2024-11-28 21:18:16.503602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:52.873 [2024-11-28 21:18:16.503626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.873 [2024-11-28 21:18:16.503637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:52.873 [2024-11-28 21:18:16.503919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.873 [2024-11-28 21:18:16.503937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:52.873 [2024-11-28 21:18:16.503954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.873 [2024-11-28 21:18:16.503964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:52.873 [2024-11-28 21:18:16.504271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.873 [2024-11-28 21:18:16.504290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:52.873 [2024-11-28 21:18:16.504306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.873 [2024-11-28 21:18:16.504317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:52.873 [2024-11-28 21:18:16.504610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.873 [2024-11-28 21:18:16.504632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:52.873 [2024-11-28 21:18:16.504652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.873 [2024-11-28 21:18:16.504662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:52.873 passed 00:10:52.873 Test: blockdev nvme passthru rw ...passed 00:10:52.873 Test: blockdev nvme passthru vendor specific ...[2024-11-28 21:18:16.506208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.873 [2024-11-28 21:18:16.506303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:52.873 [2024-11-28 21:18:16.506429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.873 [2024-11-28 21:18:16.506452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:52.873 [2024-11-28 21:18:16.506770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.873 [2024-11-28 21:18:16.506804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:52.874 [2024-11-28 21:18:16.506917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.874 [2024-11-28 21:18:16.506940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:52.874 passed 00:10:52.874 Test: blockdev nvme admin passthru ...passed 00:10:52.874 Test: blockdev copy ...passed 00:10:52.874 00:10:52.874 Run Summary: Type Total Ran Passed Failed Inactive 00:10:52.874 suites 1 1 n/a 0 0 00:10:52.874 tests 23 23 23 0 0 00:10:52.874 asserts 152 152 152 0 n/a 00:10:52.874 00:10:52.874 Elapsed time = 0.151 seconds 00:10:53.134 21:18:16 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.134 21:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.134 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:10:53.134 21:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.134 21:18:16 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:53.134 21:18:16 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:53.134 21:18:16 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:10:53.134 21:18:16 -- nvmf/common.sh@116 -- # sync 00:10:53.134 21:18:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:53.134 21:18:16 -- nvmf/common.sh@119 -- # set +e 00:10:53.134 21:18:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:53.134 21:18:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:53.134 rmmod nvme_tcp 00:10:53.134 rmmod nvme_fabrics 00:10:53.134 rmmod nvme_keyring 00:10:53.134 21:18:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:53.134 21:18:16 -- nvmf/common.sh@123 -- # set -e 00:10:53.134 21:18:16 -- nvmf/common.sh@124 -- # return 0 00:10:53.134 21:18:16 -- nvmf/common.sh@477 -- # '[' -n 75754 ']' 00:10:53.134 21:18:16 -- nvmf/common.sh@478 -- # killprocess 75754 00:10:53.134 21:18:16 -- common/autotest_common.sh@936 -- # '[' -z 75754 ']' 00:10:53.134 21:18:16 -- common/autotest_common.sh@940 -- # kill -0 75754 00:10:53.134 21:18:16 -- common/autotest_common.sh@941 -- # uname 00:10:53.134 21:18:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:53.134 21:18:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75754 00:10:53.134 killing process with pid 75754 00:10:53.134 21:18:16 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:10:53.134 21:18:16 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:10:53.134 21:18:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75754' 00:10:53.134 21:18:16 -- common/autotest_common.sh@955 -- # kill 75754 00:10:53.134 21:18:16 -- common/autotest_common.sh@960 -- # wait 75754 00:10:53.394 21:18:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:53.394 21:18:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:53.394 21:18:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:53.394 21:18:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:53.394 21:18:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:53.394 21:18:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.394 21:18:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.394 21:18:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.394 21:18:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:53.394 00:10:53.394 real 0m1.865s 00:10:53.394 user 0m5.187s 00:10:53.394 sys 0m0.616s 00:10:53.394 21:18:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:53.394 ************************************ 00:10:53.394 END TEST nvmf_bdevio 00:10:53.394 ************************************ 00:10:53.394 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:10:53.394 21:18:17 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:10:53.394 21:18:17 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:53.394 21:18:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:53.394 21:18:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:53.394 21:18:17 -- common/autotest_common.sh@10 -- # set +x 00:10:53.394 ************************************ 00:10:53.394 START TEST nvmf_bdevio_no_huge 00:10:53.394 ************************************ 00:10:53.394 21:18:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:53.394 * Looking for test storage... 
00:10:53.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:53.394 21:18:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:53.394 21:18:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:53.394 21:18:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:53.653 21:18:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:53.653 21:18:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:53.653 21:18:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:53.653 21:18:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:53.653 21:18:17 -- scripts/common.sh@335 -- # IFS=.-: 00:10:53.653 21:18:17 -- scripts/common.sh@335 -- # read -ra ver1 00:10:53.653 21:18:17 -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.653 21:18:17 -- scripts/common.sh@336 -- # read -ra ver2 00:10:53.653 21:18:17 -- scripts/common.sh@337 -- # local 'op=<' 00:10:53.653 21:18:17 -- scripts/common.sh@339 -- # ver1_l=2 00:10:53.653 21:18:17 -- scripts/common.sh@340 -- # ver2_l=1 00:10:53.653 21:18:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:53.653 21:18:17 -- scripts/common.sh@343 -- # case "$op" in 00:10:53.653 21:18:17 -- scripts/common.sh@344 -- # : 1 00:10:53.653 21:18:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:53.653 21:18:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:53.653 21:18:17 -- scripts/common.sh@364 -- # decimal 1 00:10:53.653 21:18:17 -- scripts/common.sh@352 -- # local d=1 00:10:53.653 21:18:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.653 21:18:17 -- scripts/common.sh@354 -- # echo 1 00:10:53.653 21:18:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:53.653 21:18:17 -- scripts/common.sh@365 -- # decimal 2 00:10:53.653 21:18:17 -- scripts/common.sh@352 -- # local d=2 00:10:53.653 21:18:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.653 21:18:17 -- scripts/common.sh@354 -- # echo 2 00:10:53.653 21:18:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:53.653 21:18:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:53.653 21:18:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:53.653 21:18:17 -- scripts/common.sh@367 -- # return 0 00:10:53.653 21:18:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.653 21:18:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:53.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.653 --rc genhtml_branch_coverage=1 00:10:53.653 --rc genhtml_function_coverage=1 00:10:53.653 --rc genhtml_legend=1 00:10:53.653 --rc geninfo_all_blocks=1 00:10:53.653 --rc geninfo_unexecuted_blocks=1 00:10:53.653 00:10:53.653 ' 00:10:53.653 21:18:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:53.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.653 --rc genhtml_branch_coverage=1 00:10:53.653 --rc genhtml_function_coverage=1 00:10:53.653 --rc genhtml_legend=1 00:10:53.653 --rc geninfo_all_blocks=1 00:10:53.653 --rc geninfo_unexecuted_blocks=1 00:10:53.653 00:10:53.653 ' 00:10:53.653 21:18:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:53.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.653 --rc genhtml_branch_coverage=1 00:10:53.653 --rc genhtml_function_coverage=1 00:10:53.653 --rc genhtml_legend=1 00:10:53.653 --rc geninfo_all_blocks=1 00:10:53.653 --rc geninfo_unexecuted_blocks=1 00:10:53.653 00:10:53.653 ' 00:10:53.653 
21:18:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:53.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.653 --rc genhtml_branch_coverage=1 00:10:53.653 --rc genhtml_function_coverage=1 00:10:53.653 --rc genhtml_legend=1 00:10:53.653 --rc geninfo_all_blocks=1 00:10:53.653 --rc geninfo_unexecuted_blocks=1 00:10:53.653 00:10:53.653 ' 00:10:53.653 21:18:17 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:53.653 21:18:17 -- nvmf/common.sh@7 -- # uname -s 00:10:53.653 21:18:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.653 21:18:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.653 21:18:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.653 21:18:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.653 21:18:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.653 21:18:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.653 21:18:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.653 21:18:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.653 21:18:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.653 21:18:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.653 21:18:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:10:53.653 21:18:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:10:53.653 21:18:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.653 21:18:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.653 21:18:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:53.653 21:18:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:53.653 21:18:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.653 21:18:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.653 21:18:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.653 21:18:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.653 21:18:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.653 21:18:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.653 21:18:17 -- paths/export.sh@5 -- # export PATH 00:10:53.653 21:18:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.653 21:18:17 -- nvmf/common.sh@46 -- # : 0 00:10:53.653 21:18:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:53.653 21:18:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:53.653 21:18:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:53.653 21:18:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.653 21:18:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.654 21:18:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:53.654 21:18:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:53.654 21:18:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:53.654 21:18:17 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:53.654 21:18:17 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:53.654 21:18:17 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:53.654 21:18:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:53.654 21:18:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.654 21:18:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:53.654 21:18:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:53.654 21:18:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:53.654 21:18:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.654 21:18:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.654 21:18:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.654 21:18:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:53.654 21:18:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:53.654 21:18:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:53.654 21:18:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:53.654 21:18:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:53.654 21:18:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:53.654 21:18:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.654 21:18:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.654 21:18:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:53.654 21:18:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:53.654 21:18:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:53.654 21:18:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:53.654 21:18:17 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:53.654 21:18:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.654 21:18:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:53.654 21:18:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:53.654 21:18:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:53.654 21:18:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:53.654 21:18:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:53.654 21:18:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:53.654 Cannot find device "nvmf_tgt_br" 00:10:53.654 21:18:17 -- nvmf/common.sh@154 -- # true 00:10:53.654 21:18:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:53.654 Cannot find device "nvmf_tgt_br2" 00:10:53.654 21:18:17 -- nvmf/common.sh@155 -- # true 00:10:53.654 21:18:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:53.654 21:18:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:53.654 Cannot find device "nvmf_tgt_br" 00:10:53.654 21:18:17 -- nvmf/common.sh@157 -- # true 00:10:53.654 21:18:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:53.654 Cannot find device "nvmf_tgt_br2" 00:10:53.654 21:18:17 -- nvmf/common.sh@158 -- # true 00:10:53.654 21:18:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:53.654 21:18:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:53.654 21:18:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:53.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:53.654 21:18:17 -- nvmf/common.sh@161 -- # true 00:10:53.654 21:18:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:53.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:53.654 21:18:17 -- nvmf/common.sh@162 -- # true 00:10:53.654 21:18:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:53.654 21:18:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:53.654 21:18:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:53.913 21:18:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:53.913 21:18:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:53.913 21:18:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:53.913 21:18:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:53.913 21:18:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:53.913 21:18:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:53.913 21:18:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:53.913 21:18:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:53.913 21:18:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:53.913 21:18:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:53.913 21:18:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:53.913 21:18:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:53.913 21:18:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:10:53.913 21:18:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:53.913 21:18:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:53.913 21:18:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:53.913 21:18:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:53.913 21:18:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:53.913 21:18:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:53.913 21:18:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:53.913 21:18:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:53.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:10:53.913 00:10:53.913 --- 10.0.0.2 ping statistics --- 00:10:53.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.913 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:53.913 21:18:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:53.913 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:53.913 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:10:53.913 00:10:53.913 --- 10.0.0.3 ping statistics --- 00:10:53.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.913 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:53.913 21:18:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:53.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:53.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:10:53.913 00:10:53.913 --- 10.0.0.1 ping statistics --- 00:10:53.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.913 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:10:53.913 21:18:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.913 21:18:17 -- nvmf/common.sh@421 -- # return 0 00:10:53.913 21:18:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:53.913 21:18:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.913 21:18:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:53.913 21:18:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:53.913 21:18:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.913 21:18:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:53.913 21:18:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:53.913 21:18:17 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:53.913 21:18:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:53.913 21:18:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:53.913 21:18:17 -- common/autotest_common.sh@10 -- # set +x 00:10:53.913 21:18:17 -- nvmf/common.sh@469 -- # nvmfpid=75956 00:10:53.913 21:18:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:10:53.913 21:18:17 -- nvmf/common.sh@470 -- # waitforlisten 75956 00:10:53.913 21:18:17 -- common/autotest_common.sh@829 -- # '[' -z 75956 ']' 00:10:53.913 21:18:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
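Editor's note: the only difference from the earlier bdevio pass is how the target is launched. The --no-huge -s 1024 pair asks the app to back its memory with roughly 1024 MB of ordinary (non-hugepage) memory, which is the point of this test variant. Condensed from the nvmfappstart trace above:

  # Target launch for the no-hugepage variant, as traced above (run inside the target namespace)
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78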
00:10:53.913 21:18:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:53.913 21:18:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.913 21:18:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:53.913 21:18:17 -- common/autotest_common.sh@10 -- # set +x 00:10:53.913 [2024-11-28 21:18:17.630995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:53.913 [2024-11-28 21:18:17.631110] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:10:54.172 [2024-11-28 21:18:17.763772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.172 [2024-11-28 21:18:17.835849] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:54.172 [2024-11-28 21:18:17.836010] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.172 [2024-11-28 21:18:17.836022] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.172 [2024-11-28 21:18:17.836045] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.172 [2024-11-28 21:18:17.836200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:54.172 [2024-11-28 21:18:17.836440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:54.172 [2024-11-28 21:18:17.836576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:54.172 [2024-11-28 21:18:17.836582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.108 21:18:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.109 21:18:18 -- common/autotest_common.sh@862 -- # return 0 00:10:55.109 21:18:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:55.109 21:18:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:55.109 21:18:18 -- common/autotest_common.sh@10 -- # set +x 00:10:55.109 21:18:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.109 21:18:18 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.109 21:18:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.109 21:18:18 -- common/autotest_common.sh@10 -- # set +x 00:10:55.109 [2024-11-28 21:18:18.688990] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.109 21:18:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.109 21:18:18 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:55.109 21:18:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.109 21:18:18 -- common/autotest_common.sh@10 -- # set +x 00:10:55.109 Malloc0 00:10:55.109 21:18:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.109 21:18:18 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:55.109 21:18:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.109 21:18:18 -- common/autotest_common.sh@10 -- # set +x 00:10:55.109 21:18:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.109 21:18:18 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:10:55.109 21:18:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.109 21:18:18 -- common/autotest_common.sh@10 -- # set +x 00:10:55.109 21:18:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.109 21:18:18 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.109 21:18:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.109 21:18:18 -- common/autotest_common.sh@10 -- # set +x 00:10:55.109 [2024-11-28 21:18:18.733192] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.109 21:18:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.109 21:18:18 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:10:55.109 21:18:18 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:55.109 21:18:18 -- nvmf/common.sh@520 -- # config=() 00:10:55.109 21:18:18 -- nvmf/common.sh@520 -- # local subsystem config 00:10:55.109 21:18:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:55.109 21:18:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:55.109 { 00:10:55.109 "params": { 00:10:55.109 "name": "Nvme$subsystem", 00:10:55.109 "trtype": "$TEST_TRANSPORT", 00:10:55.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:55.109 "adrfam": "ipv4", 00:10:55.109 "trsvcid": "$NVMF_PORT", 00:10:55.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:55.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:55.109 "hdgst": ${hdgst:-false}, 00:10:55.109 "ddgst": ${ddgst:-false} 00:10:55.109 }, 00:10:55.109 "method": "bdev_nvme_attach_controller" 00:10:55.109 } 00:10:55.109 EOF 00:10:55.109 )") 00:10:55.109 21:18:18 -- nvmf/common.sh@542 -- # cat 00:10:55.109 21:18:18 -- nvmf/common.sh@544 -- # jq . 00:10:55.109 21:18:18 -- nvmf/common.sh@545 -- # IFS=, 00:10:55.109 21:18:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:55.109 "params": { 00:10:55.109 "name": "Nvme1", 00:10:55.109 "trtype": "tcp", 00:10:55.109 "traddr": "10.0.0.2", 00:10:55.109 "adrfam": "ipv4", 00:10:55.109 "trsvcid": "4420", 00:10:55.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:55.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:55.109 "hdgst": false, 00:10:55.109 "ddgst": false 00:10:55.109 }, 00:10:55.109 "method": "bdev_nvme_attach_controller" 00:10:55.109 }' 00:10:55.109 [2024-11-28 21:18:18.784807] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:55.109 [2024-11-28 21:18:18.785385] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid75999 ] 00:10:55.368 [2024-11-28 21:18:18.924566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:55.368 [2024-11-28 21:18:19.030280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.368 [2024-11-28 21:18:19.030417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.368 [2024-11-28 21:18:19.030424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.627 [2024-11-28 21:18:19.184524] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
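The rpc_cmd calls above map one-to-one onto plain rpc.py invocations; condensed, the target-side setup for this test is (arguments exactly as traced in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                                     # flags as traced above
$rpc bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB backing bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420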
00:10:55.627 [2024-11-28 21:18:19.184865] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:55.627 I/O targets: 00:10:55.627 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:55.627 00:10:55.627 00:10:55.627 CUnit - A unit testing framework for C - Version 2.1-3 00:10:55.627 http://cunit.sourceforge.net/ 00:10:55.627 00:10:55.627 00:10:55.627 Suite: bdevio tests on: Nvme1n1 00:10:55.627 Test: blockdev write read block ...passed 00:10:55.627 Test: blockdev write zeroes read block ...passed 00:10:55.627 Test: blockdev write zeroes read no split ...passed 00:10:55.627 Test: blockdev write zeroes read split ...passed 00:10:55.627 Test: blockdev write zeroes read split partial ...passed 00:10:55.627 Test: blockdev reset ...[2024-11-28 21:18:19.222169] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:55.627 [2024-11-28 21:18:19.222461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7c760 (9): Bad file descriptor 00:10:55.627 [2024-11-28 21:18:19.242629] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:55.627 passed 00:10:55.627 Test: blockdev write read 8 blocks ...passed 00:10:55.627 Test: blockdev write read size > 128k ...passed 00:10:55.627 Test: blockdev write read invalid size ...passed 00:10:55.627 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:55.627 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:55.627 Test: blockdev write read max offset ...passed 00:10:55.627 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:55.627 Test: blockdev writev readv 8 blocks ...passed 00:10:55.627 Test: blockdev writev readv 30 x 1block ...passed 00:10:55.627 Test: blockdev writev readv block ...passed 00:10:55.627 Test: blockdev writev readv size > 128k ...passed 00:10:55.627 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:55.627 Test: blockdev comparev and writev ...[2024-11-28 21:18:19.253778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.627 [2024-11-28 21:18:19.253841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:55.627 [2024-11-28 21:18:19.253880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.627 [2024-11-28 21:18:19.253890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:55.627 [2024-11-28 21:18:19.254172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.627 [2024-11-28 21:18:19.254191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:55.627 [2024-11-28 21:18:19.254207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.627 [2024-11-28 21:18:19.254216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:55.628 [2024-11-28 21:18:19.254471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.628 [2024-11-28 21:18:19.254487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:55.628 [2024-11-28 21:18:19.254502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.628 [2024-11-28 21:18:19.254512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:55.628 [2024-11-28 21:18:19.254767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.628 [2024-11-28 21:18:19.254782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:55.628 [2024-11-28 21:18:19.254798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.628 [2024-11-28 21:18:19.254808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:55.628 passed 00:10:55.628 Test: blockdev nvme passthru rw ...passed 00:10:55.628 Test: blockdev nvme passthru vendor specific ...[2024-11-28 21:18:19.256135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:55.628 [2024-11-28 21:18:19.256283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:55.628 [2024-11-28 21:18:19.256591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:55.628 [2024-11-28 21:18:19.256624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:55.628 [2024-11-28 21:18:19.256830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:55.628 [2024-11-28 21:18:19.256855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:55.628 [2024-11-28 21:18:19.257189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:55.628 [2024-11-28 21:18:19.257224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:55.628 passed 00:10:55.628 Test: blockdev nvme admin passthru ...passed 00:10:55.628 Test: blockdev copy ...passed 00:10:55.628 00:10:55.628 Run Summary: Type Total Ran Passed Failed Inactive 00:10:55.628 suites 1 1 n/a 0 0 00:10:55.628 tests 23 23 23 0 0 00:10:55.628 asserts 152 152 152 0 n/a 00:10:55.628 00:10:55.628 Elapsed time = 0.173 seconds 00:10:55.887 21:18:19 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.887 21:18:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.887 21:18:19 -- common/autotest_common.sh@10 -- # set +x 00:10:55.887 21:18:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.887 21:18:19 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:55.887 21:18:19 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:55.887 21:18:19 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:10:55.887 21:18:19 -- nvmf/common.sh@116 -- # sync 00:10:55.887 21:18:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:55.887 21:18:19 -- nvmf/common.sh@119 -- # set +e 00:10:55.887 21:18:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:55.887 21:18:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:55.887 rmmod nvme_tcp 00:10:55.887 rmmod nvme_fabrics 00:10:55.887 rmmod nvme_keyring 00:10:56.146 21:18:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:56.147 21:18:19 -- nvmf/common.sh@123 -- # set -e 00:10:56.147 21:18:19 -- nvmf/common.sh@124 -- # return 0 00:10:56.147 21:18:19 -- nvmf/common.sh@477 -- # '[' -n 75956 ']' 00:10:56.147 21:18:19 -- nvmf/common.sh@478 -- # killprocess 75956 00:10:56.147 21:18:19 -- common/autotest_common.sh@936 -- # '[' -z 75956 ']' 00:10:56.147 21:18:19 -- common/autotest_common.sh@940 -- # kill -0 75956 00:10:56.147 21:18:19 -- common/autotest_common.sh@941 -- # uname 00:10:56.147 21:18:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:56.147 21:18:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75956 00:10:56.147 21:18:19 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:10:56.147 21:18:19 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:10:56.147 21:18:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75956' 00:10:56.147 killing process with pid 75956 00:10:56.147 21:18:19 -- common/autotest_common.sh@955 -- # kill 75956 00:10:56.147 21:18:19 -- common/autotest_common.sh@960 -- # wait 75956 00:10:56.406 21:18:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:56.406 21:18:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:56.406 21:18:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:56.406 21:18:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.406 21:18:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:56.406 21:18:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.406 21:18:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.406 21:18:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.406 21:18:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:56.406 00:10:56.406 real 0m2.994s 00:10:56.406 user 0m9.835s 00:10:56.406 sys 0m1.047s 00:10:56.406 21:18:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:56.406 21:18:20 -- common/autotest_common.sh@10 -- # set +x 00:10:56.406 ************************************ 00:10:56.406 END TEST nvmf_bdevio_no_huge 00:10:56.406 ************************************ 00:10:56.406 21:18:20 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:56.406 21:18:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:56.406 21:18:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:56.406 21:18:20 -- common/autotest_common.sh@10 -- # set +x 00:10:56.406 ************************************ 00:10:56.406 START TEST nvmf_tls 00:10:56.406 ************************************ 00:10:56.406 21:18:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:56.406 * Looking for test storage... 
00:10:56.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:56.406 21:18:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:56.406 21:18:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:56.406 21:18:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:56.666 21:18:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:56.666 21:18:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:56.666 21:18:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:56.666 21:18:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:56.666 21:18:20 -- scripts/common.sh@335 -- # IFS=.-: 00:10:56.666 21:18:20 -- scripts/common.sh@335 -- # read -ra ver1 00:10:56.666 21:18:20 -- scripts/common.sh@336 -- # IFS=.-: 00:10:56.666 21:18:20 -- scripts/common.sh@336 -- # read -ra ver2 00:10:56.666 21:18:20 -- scripts/common.sh@337 -- # local 'op=<' 00:10:56.666 21:18:20 -- scripts/common.sh@339 -- # ver1_l=2 00:10:56.666 21:18:20 -- scripts/common.sh@340 -- # ver2_l=1 00:10:56.666 21:18:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:56.666 21:18:20 -- scripts/common.sh@343 -- # case "$op" in 00:10:56.666 21:18:20 -- scripts/common.sh@344 -- # : 1 00:10:56.666 21:18:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:56.666 21:18:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:56.666 21:18:20 -- scripts/common.sh@364 -- # decimal 1 00:10:56.666 21:18:20 -- scripts/common.sh@352 -- # local d=1 00:10:56.666 21:18:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.666 21:18:20 -- scripts/common.sh@354 -- # echo 1 00:10:56.666 21:18:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:56.666 21:18:20 -- scripts/common.sh@365 -- # decimal 2 00:10:56.666 21:18:20 -- scripts/common.sh@352 -- # local d=2 00:10:56.666 21:18:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.666 21:18:20 -- scripts/common.sh@354 -- # echo 2 00:10:56.666 21:18:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:56.666 21:18:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:56.666 21:18:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:56.666 21:18:20 -- scripts/common.sh@367 -- # return 0 00:10:56.666 21:18:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:56.666 21:18:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:56.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.666 --rc genhtml_branch_coverage=1 00:10:56.666 --rc genhtml_function_coverage=1 00:10:56.666 --rc genhtml_legend=1 00:10:56.666 --rc geninfo_all_blocks=1 00:10:56.666 --rc geninfo_unexecuted_blocks=1 00:10:56.666 00:10:56.666 ' 00:10:56.666 21:18:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:56.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.666 --rc genhtml_branch_coverage=1 00:10:56.666 --rc genhtml_function_coverage=1 00:10:56.666 --rc genhtml_legend=1 00:10:56.666 --rc geninfo_all_blocks=1 00:10:56.666 --rc geninfo_unexecuted_blocks=1 00:10:56.666 00:10:56.666 ' 00:10:56.666 21:18:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:56.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.666 --rc genhtml_branch_coverage=1 00:10:56.666 --rc genhtml_function_coverage=1 00:10:56.666 --rc genhtml_legend=1 00:10:56.666 --rc geninfo_all_blocks=1 00:10:56.666 --rc geninfo_unexecuted_blocks=1 00:10:56.666 00:10:56.666 ' 00:10:56.666 
21:18:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:56.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.666 --rc genhtml_branch_coverage=1 00:10:56.666 --rc genhtml_function_coverage=1 00:10:56.666 --rc genhtml_legend=1 00:10:56.666 --rc geninfo_all_blocks=1 00:10:56.666 --rc geninfo_unexecuted_blocks=1 00:10:56.666 00:10:56.666 ' 00:10:56.666 21:18:20 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:56.666 21:18:20 -- nvmf/common.sh@7 -- # uname -s 00:10:56.666 21:18:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.666 21:18:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.666 21:18:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.666 21:18:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.666 21:18:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.666 21:18:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.666 21:18:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.666 21:18:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.666 21:18:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.666 21:18:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.666 21:18:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:10:56.666 21:18:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:10:56.666 21:18:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.666 21:18:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.666 21:18:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:56.666 21:18:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:56.666 21:18:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.666 21:18:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.666 21:18:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.666 21:18:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.666 21:18:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.667 21:18:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.667 21:18:20 -- paths/export.sh@5 -- # export PATH 00:10:56.667 21:18:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.667 21:18:20 -- nvmf/common.sh@46 -- # : 0 00:10:56.667 21:18:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:56.667 21:18:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:56.667 21:18:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:56.667 21:18:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.667 21:18:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.667 21:18:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:56.667 21:18:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:56.667 21:18:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:56.667 21:18:20 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:56.667 21:18:20 -- target/tls.sh@71 -- # nvmftestinit 00:10:56.667 21:18:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:56.667 21:18:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.667 21:18:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:56.667 21:18:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:56.667 21:18:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:56.667 21:18:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.667 21:18:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.667 21:18:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.667 21:18:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:56.667 21:18:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:56.667 21:18:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:56.667 21:18:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:56.667 21:18:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:56.667 21:18:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:56.667 21:18:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.667 21:18:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.667 21:18:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:56.667 21:18:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:56.667 21:18:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:56.667 21:18:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:56.667 21:18:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:56.667 
21:18:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.667 21:18:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:56.667 21:18:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:56.667 21:18:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:56.667 21:18:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:56.667 21:18:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:56.667 21:18:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:56.667 Cannot find device "nvmf_tgt_br" 00:10:56.667 21:18:20 -- nvmf/common.sh@154 -- # true 00:10:56.667 21:18:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:56.667 Cannot find device "nvmf_tgt_br2" 00:10:56.667 21:18:20 -- nvmf/common.sh@155 -- # true 00:10:56.667 21:18:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:56.667 21:18:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:56.667 Cannot find device "nvmf_tgt_br" 00:10:56.667 21:18:20 -- nvmf/common.sh@157 -- # true 00:10:56.667 21:18:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:56.667 Cannot find device "nvmf_tgt_br2" 00:10:56.667 21:18:20 -- nvmf/common.sh@158 -- # true 00:10:56.667 21:18:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:56.667 21:18:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:56.927 21:18:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:56.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:56.927 21:18:20 -- nvmf/common.sh@161 -- # true 00:10:56.927 21:18:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:56.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:56.927 21:18:20 -- nvmf/common.sh@162 -- # true 00:10:56.927 21:18:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:56.927 21:18:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:56.927 21:18:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:56.927 21:18:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:56.927 21:18:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:56.927 21:18:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:56.927 21:18:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:56.927 21:18:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:56.927 21:18:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:56.927 21:18:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:56.927 21:18:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:56.927 21:18:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:56.927 21:18:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:56.927 21:18:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:56.927 21:18:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:56.927 21:18:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:56.927 21:18:20 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:56.927 21:18:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:56.927 21:18:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:56.927 21:18:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:56.927 21:18:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:56.927 21:18:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:56.927 21:18:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:56.927 21:18:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:56.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:56.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:10:56.927 00:10:56.927 --- 10.0.0.2 ping statistics --- 00:10:56.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.927 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:10:56.927 21:18:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:56.927 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:56.927 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:10:56.927 00:10:56.927 --- 10.0.0.3 ping statistics --- 00:10:56.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.927 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:56.927 21:18:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:56.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:10:56.927 00:10:56.927 --- 10.0.0.1 ping statistics --- 00:10:56.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.927 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:56.927 21:18:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.927 21:18:20 -- nvmf/common.sh@421 -- # return 0 00:10:56.927 21:18:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:56.927 21:18:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.927 21:18:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:56.927 21:18:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:56.927 21:18:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.927 21:18:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:56.927 21:18:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:56.927 21:18:20 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:10:56.927 21:18:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:56.927 21:18:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:56.927 21:18:20 -- common/autotest_common.sh@10 -- # set +x 00:10:56.927 21:18:20 -- nvmf/common.sh@469 -- # nvmfpid=76175 00:10:56.927 21:18:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:10:56.927 21:18:20 -- nvmf/common.sh@470 -- # waitforlisten 76175 00:10:56.927 21:18:20 -- common/autotest_common.sh@829 -- # '[' -z 76175 ']' 00:10:56.927 21:18:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.927 21:18:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.927 21:18:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
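For reference, the veth/bridge topology rebuilt just above condenses to a short standalone script, with interface names, addresses and firewall rules exactly as in the commands traced in this run:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target ends move into the namespace
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # root namespace reaches both target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # and the namespace reaches the initiator
modprobe nvme-tcp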
00:10:56.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.927 21:18:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.927 21:18:20 -- common/autotest_common.sh@10 -- # set +x 00:10:57.187 [2024-11-28 21:18:20.691920] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:57.187 [2024-11-28 21:18:20.692018] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.187 [2024-11-28 21:18:20.836109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.187 [2024-11-28 21:18:20.875606] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:57.187 [2024-11-28 21:18:20.875778] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.187 [2024-11-28 21:18:20.875801] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.187 [2024-11-28 21:18:20.875812] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.187 [2024-11-28 21:18:20.875847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.445 21:18:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:57.445 21:18:20 -- common/autotest_common.sh@862 -- # return 0 00:10:57.445 21:18:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:57.445 21:18:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:57.445 21:18:20 -- common/autotest_common.sh@10 -- # set +x 00:10:57.445 21:18:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.445 21:18:20 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:10:57.445 21:18:20 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:10:57.704 true 00:10:57.704 21:18:21 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:57.704 21:18:21 -- target/tls.sh@82 -- # jq -r .tls_version 00:10:57.962 21:18:21 -- target/tls.sh@82 -- # version=0 00:10:57.962 21:18:21 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:10:57.962 21:18:21 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:58.221 21:18:21 -- target/tls.sh@90 -- # jq -r .tls_version 00:10:58.221 21:18:21 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:58.481 21:18:22 -- target/tls.sh@90 -- # version=13 00:10:58.481 21:18:22 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:10:58.481 21:18:22 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:10:58.739 21:18:22 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:58.739 21:18:22 -- target/tls.sh@98 -- # jq -r .tls_version 00:10:58.997 21:18:22 -- target/tls.sh@98 -- # version=7 00:10:58.997 21:18:22 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:10:58.997 21:18:22 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:10:58.997 21:18:22 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:59.255 21:18:22 -- target/tls.sh@105 -- # ktls=false 00:10:59.255 21:18:22 -- 
target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:10:59.255 21:18:22 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:10:59.514 21:18:22 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:10:59.514 21:18:22 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:59.514 21:18:23 -- target/tls.sh@113 -- # ktls=true 00:10:59.514 21:18:23 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:10:59.514 21:18:23 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:10:59.772 21:18:23 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:59.772 21:18:23 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:11:00.030 21:18:23 -- target/tls.sh@121 -- # ktls=false 00:11:00.031 21:18:23 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:11:00.031 21:18:23 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:11:00.031 21:18:23 -- target/tls.sh@49 -- # local key hash crc 00:11:00.031 21:18:23 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:11:00.031 21:18:23 -- target/tls.sh@51 -- # hash=01 00:11:00.031 21:18:23 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:11:00.031 21:18:23 -- target/tls.sh@52 -- # gzip -1 -c 00:11:00.031 21:18:23 -- target/tls.sh@52 -- # tail -c8 00:11:00.031 21:18:23 -- target/tls.sh@52 -- # head -c 4 00:11:00.031 21:18:23 -- target/tls.sh@52 -- # crc='p$H�' 00:11:00.031 21:18:23 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:00.031 21:18:23 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:11:00.031 21:18:23 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:00.031 21:18:23 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:00.031 21:18:23 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:11:00.031 21:18:23 -- target/tls.sh@49 -- # local key hash crc 00:11:00.031 21:18:23 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:11:00.031 21:18:23 -- target/tls.sh@51 -- # hash=01 00:11:00.031 21:18:23 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:11:00.031 21:18:23 -- target/tls.sh@52 -- # gzip -1 -c 00:11:00.031 21:18:23 -- target/tls.sh@52 -- # tail -c8 00:11:00.031 21:18:23 -- target/tls.sh@52 -- # head -c 4 00:11:00.031 21:18:23 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:11:00.031 21:18:23 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:00.031 21:18:23 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:11:00.031 21:18:23 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:00.031 21:18:23 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:00.031 21:18:23 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:00.031 21:18:23 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:00.031 21:18:23 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:00.031 21:18:23 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:00.031 21:18:23 -- target/tls.sh@136 -- # chmod 0600 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:00.031 21:18:23 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:00.031 21:18:23 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:00.289 21:18:23 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:00.548 21:18:24 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:00.548 21:18:24 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:00.548 21:18:24 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:00.807 [2024-11-28 21:18:24.469508] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.807 21:18:24 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:01.066 21:18:24 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:01.326 [2024-11-28 21:18:24.933600] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:01.326 [2024-11-28 21:18:24.933798] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.326 21:18:24 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:01.584 malloc0 00:11:01.584 21:18:25 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:01.843 21:18:25 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:02.101 21:18:25 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:12.145 Initializing NVMe Controllers 00:11:12.145 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:12.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:12.145 Initialization complete. Launching workers. 
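The key1.txt and key2.txt files configured above hold keys in the NVMe TLS interchange format derived a few lines earlier; a minimal sketch of that derivation, using the same 16-byte hex key and hash 01 and taking the CRC from the gzip -1 trailer exactly as the format_interchange_psk trace does:

key=00112233445566778899aabbccddeeff
hash=01
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)           # gzip trailer is CRC32+ISIZE; keep the 4 CRC32 bytes
psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"     # yields NVMeTLSkey-1:01:MDAx...JEiQ: as printed above
echo -n "$psk" > /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt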
00:11:12.145 ======================================================== 00:11:12.145 Latency(us) 00:11:12.145 Device Information : IOPS MiB/s Average min max 00:11:12.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10654.47 41.62 6008.15 1562.40 11815.11 00:11:12.145 ======================================================== 00:11:12.145 Total : 10654.47 41.62 6008.15 1562.40 11815.11 00:11:12.145 00:11:12.145 21:18:35 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:12.145 21:18:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:12.145 21:18:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:12.145 21:18:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:12.145 21:18:35 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:12.145 21:18:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:12.145 21:18:35 -- target/tls.sh@28 -- # bdevperf_pid=76409 00:11:12.145 21:18:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:12.145 21:18:35 -- target/tls.sh@31 -- # waitforlisten 76409 /var/tmp/bdevperf.sock 00:11:12.145 21:18:35 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:12.145 21:18:35 -- common/autotest_common.sh@829 -- # '[' -z 76409 ']' 00:11:12.145 21:18:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:12.145 21:18:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.145 21:18:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:12.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:12.145 21:18:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.145 21:18:35 -- common/autotest_common.sh@10 -- # set +x 00:11:12.145 [2024-11-28 21:18:35.868437] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:12.145 [2024-11-28 21:18:35.868821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76409 ] 00:11:12.403 [2024-11-28 21:18:36.019901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.403 [2024-11-28 21:18:36.061400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.339 21:18:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.339 21:18:36 -- common/autotest_common.sh@862 -- # return 0 00:11:13.339 21:18:36 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:13.598 [2024-11-28 21:18:37.109364] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:13.598 TLSTESTn1 00:11:13.598 21:18:37 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:13.598 Running I/O for 10 seconds... 
00:11:25.801 00:11:25.802 Latency(us) 00:11:25.802 [2024-11-28T21:18:49.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.802 [2024-11-28T21:18:49.545Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:25.802 Verification LBA range: start 0x0 length 0x2000 00:11:25.802 TLSTESTn1 : 10.02 5666.02 22.13 0.00 0.00 22547.79 6017.40 25976.09 00:11:25.802 [2024-11-28T21:18:49.545Z] =================================================================================================================== 00:11:25.802 [2024-11-28T21:18:49.545Z] Total : 5666.02 22.13 0.00 0.00 22547.79 6017.40 25976.09 00:11:25.802 0 00:11:25.802 21:18:47 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:25.802 21:18:47 -- target/tls.sh@45 -- # killprocess 76409 00:11:25.802 21:18:47 -- common/autotest_common.sh@936 -- # '[' -z 76409 ']' 00:11:25.802 21:18:47 -- common/autotest_common.sh@940 -- # kill -0 76409 00:11:25.802 21:18:47 -- common/autotest_common.sh@941 -- # uname 00:11:25.802 21:18:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:25.802 21:18:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76409 00:11:25.802 killing process with pid 76409 00:11:25.802 Received shutdown signal, test time was about 10.000000 seconds 00:11:25.802 00:11:25.802 Latency(us) 00:11:25.802 [2024-11-28T21:18:49.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.802 [2024-11-28T21:18:49.545Z] =================================================================================================================== 00:11:25.802 [2024-11-28T21:18:49.545Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:25.802 21:18:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:25.802 21:18:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:25.802 21:18:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76409' 00:11:25.802 21:18:47 -- common/autotest_common.sh@955 -- # kill 76409 00:11:25.802 21:18:47 -- common/autotest_common.sh@960 -- # wait 76409 00:11:25.802 21:18:47 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:25.802 21:18:47 -- common/autotest_common.sh@650 -- # local es=0 00:11:25.802 21:18:47 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:25.802 21:18:47 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:25.802 21:18:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:25.802 21:18:47 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:25.802 21:18:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:25.802 21:18:47 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:25.802 21:18:47 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:25.802 21:18:47 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:25.802 21:18:47 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:25.802 21:18:47 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:11:25.802 21:18:47 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:25.802 
21:18:47 -- target/tls.sh@28 -- # bdevperf_pid=76548 00:11:25.802 21:18:47 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:25.802 21:18:47 -- target/tls.sh@31 -- # waitforlisten 76548 /var/tmp/bdevperf.sock 00:11:25.802 21:18:47 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:25.802 21:18:47 -- common/autotest_common.sh@829 -- # '[' -z 76548 ']' 00:11:25.802 21:18:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:25.802 21:18:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:25.802 21:18:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:25.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:25.802 21:18:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:25.802 21:18:47 -- common/autotest_common.sh@10 -- # set +x 00:11:25.802 [2024-11-28 21:18:47.604195] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:25.802 [2024-11-28 21:18:47.604480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76548 ] 00:11:25.802 [2024-11-28 21:18:47.743989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.802 [2024-11-28 21:18:47.777601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.802 21:18:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:25.802 21:18:48 -- common/autotest_common.sh@862 -- # return 0 00:11:25.802 21:18:48 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:25.802 [2024-11-28 21:18:48.800517] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:25.802 [2024-11-28 21:18:48.805906] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:25.802 [2024-11-28 21:18:48.806528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a9b80 (107): Transport endpoint is not connected 00:11:25.802 [2024-11-28 21:18:48.807519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a9b80 (9): Bad file descriptor 00:11:25.802 [2024-11-28 21:18:48.808513] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:25.802 [2024-11-28 21:18:48.808537] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:25.802 [2024-11-28 21:18:48.808564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:25.802 request: 00:11:25.802 { 00:11:25.802 "name": "TLSTEST", 00:11:25.802 "trtype": "tcp", 00:11:25.802 "traddr": "10.0.0.2", 00:11:25.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:25.802 "adrfam": "ipv4", 00:11:25.802 "trsvcid": "4420", 00:11:25.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:25.802 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:11:25.802 "method": "bdev_nvme_attach_controller", 00:11:25.802 "req_id": 1 00:11:25.802 } 00:11:25.802 Got JSON-RPC error response 00:11:25.802 response: 00:11:25.802 { 00:11:25.802 "code": -32602, 00:11:25.802 "message": "Invalid parameters" 00:11:25.802 } 00:11:25.802 21:18:48 -- target/tls.sh@36 -- # killprocess 76548 00:11:25.802 21:18:48 -- common/autotest_common.sh@936 -- # '[' -z 76548 ']' 00:11:25.802 21:18:48 -- common/autotest_common.sh@940 -- # kill -0 76548 00:11:25.802 21:18:48 -- common/autotest_common.sh@941 -- # uname 00:11:25.802 21:18:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:25.802 21:18:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76548 00:11:25.802 killing process with pid 76548 00:11:25.802 Received shutdown signal, test time was about 10.000000 seconds 00:11:25.802 00:11:25.802 Latency(us) 00:11:25.802 [2024-11-28T21:18:49.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.802 [2024-11-28T21:18:49.545Z] =================================================================================================================== 00:11:25.802 [2024-11-28T21:18:49.545Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:25.802 21:18:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:25.802 21:18:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:25.802 21:18:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76548' 00:11:25.802 21:18:48 -- common/autotest_common.sh@955 -- # kill 76548 00:11:25.802 21:18:48 -- common/autotest_common.sh@960 -- # wait 76548 00:11:25.802 21:18:48 -- target/tls.sh@37 -- # return 1 00:11:25.802 21:18:48 -- common/autotest_common.sh@653 -- # es=1 00:11:25.802 21:18:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:25.802 21:18:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:25.802 21:18:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:25.802 21:18:48 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:25.802 21:18:48 -- common/autotest_common.sh@650 -- # local es=0 00:11:25.802 21:18:48 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:25.802 21:18:48 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:25.802 21:18:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:25.802 21:18:48 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:25.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
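The -32602 "Invalid parameters" response above is the expected outcome when the initiator presents key2.txt against a subsystem provisioned with key1.txt; the harness inverts the exit status, roughly as in this sketch:

# Expected-failure check: attaching with the mismatched key must not succeed.
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
     -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
     --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt; then
  echo "unexpected success with the wrong PSK" >&2
  exit 1
fi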
00:11:25.802 21:18:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:25.802 21:18:48 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:25.802 21:18:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:25.802 21:18:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:25.802 21:18:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:11:25.802 21:18:48 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:25.802 21:18:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:25.802 21:18:48 -- target/tls.sh@28 -- # bdevperf_pid=76570 00:11:25.802 21:18:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:25.802 21:18:48 -- target/tls.sh@31 -- # waitforlisten 76570 /var/tmp/bdevperf.sock 00:11:25.802 21:18:48 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:25.802 21:18:48 -- common/autotest_common.sh@829 -- # '[' -z 76570 ']' 00:11:25.802 21:18:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:25.802 21:18:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:25.802 21:18:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:25.803 21:18:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:25.803 21:18:48 -- common/autotest_common.sh@10 -- # set +x 00:11:25.803 [2024-11-28 21:18:49.038637] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:25.803 [2024-11-28 21:18:49.038966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76570 ] 00:11:25.803 [2024-11-28 21:18:49.171796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.803 [2024-11-28 21:18:49.205850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.370 21:18:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:26.370 21:18:50 -- common/autotest_common.sh@862 -- # return 0 00:11:26.370 21:18:50 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:26.629 [2024-11-28 21:18:50.259367] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:26.629 [2024-11-28 21:18:50.264973] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:26.629 [2024-11-28 21:18:50.265288] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:26.629 [2024-11-28 21:18:50.265540] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:26.629 [2024-11-28 21:18:50.265886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd62b80 (107): Transport endpoint is not connected 00:11:26.629 [2024-11-28 21:18:50.266375] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd62b80 (9): Bad file descriptor 00:11:26.629 [2024-11-28 21:18:50.267369] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:26.629 [2024-11-28 21:18:50.267401] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:26.629 [2024-11-28 21:18:50.267412] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:26.629 request: 00:11:26.629 { 00:11:26.629 "name": "TLSTEST", 00:11:26.629 "trtype": "tcp", 00:11:26.629 "traddr": "10.0.0.2", 00:11:26.629 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:11:26.629 "adrfam": "ipv4", 00:11:26.629 "trsvcid": "4420", 00:11:26.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:26.629 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:26.629 "method": "bdev_nvme_attach_controller", 00:11:26.629 "req_id": 1 00:11:26.629 } 00:11:26.629 Got JSON-RPC error response 00:11:26.629 response: 00:11:26.629 { 00:11:26.629 "code": -32602, 00:11:26.629 "message": "Invalid parameters" 00:11:26.629 } 00:11:26.629 21:18:50 -- target/tls.sh@36 -- # killprocess 76570 00:11:26.629 21:18:50 -- common/autotest_common.sh@936 -- # '[' -z 76570 ']' 00:11:26.629 21:18:50 -- common/autotest_common.sh@940 -- # kill -0 76570 00:11:26.629 21:18:50 -- common/autotest_common.sh@941 -- # uname 00:11:26.629 21:18:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:26.629 21:18:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76570 00:11:26.629 killing process with pid 76570 00:11:26.629 Received shutdown signal, test time was about 10.000000 seconds 00:11:26.629 00:11:26.629 Latency(us) 00:11:26.629 [2024-11-28T21:18:50.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.629 [2024-11-28T21:18:50.372Z] =================================================================================================================== 00:11:26.629 [2024-11-28T21:18:50.372Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:26.629 21:18:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:26.629 21:18:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:26.629 21:18:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76570' 00:11:26.629 21:18:50 -- common/autotest_common.sh@955 -- # kill 76570 00:11:26.629 21:18:50 -- common/autotest_common.sh@960 -- # wait 76570 00:11:26.888 21:18:50 -- target/tls.sh@37 -- # return 1 00:11:26.888 21:18:50 -- common/autotest_common.sh@653 -- # es=1 00:11:26.888 21:18:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:26.888 21:18:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:26.888 21:18:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:26.888 21:18:50 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:26.888 21:18:50 -- common/autotest_common.sh@650 -- # local es=0 00:11:26.888 21:18:50 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:26.888 21:18:50 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:26.888 21:18:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.888 21:18:50 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:26.888 21:18:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.888 21:18:50 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:26.888 21:18:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:26.888 21:18:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:11:26.888 21:18:50 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host1 00:11:26.888 21:18:50 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:26.888 21:18:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:26.888 21:18:50 -- target/tls.sh@28 -- # bdevperf_pid=76598 00:11:26.888 21:18:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:26.888 21:18:50 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:26.888 21:18:50 -- target/tls.sh@31 -- # waitforlisten 76598 /var/tmp/bdevperf.sock 00:11:26.888 21:18:50 -- common/autotest_common.sh@829 -- # '[' -z 76598 ']' 00:11:26.888 21:18:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:26.888 21:18:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:26.888 21:18:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:26.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:26.888 21:18:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:26.888 21:18:50 -- common/autotest_common.sh@10 -- # set +x 00:11:26.888 [2024-11-28 21:18:50.518932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:26.888 [2024-11-28 21:18:50.519206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76598 ] 00:11:27.147 [2024-11-28 21:18:50.656589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.147 [2024-11-28 21:18:50.690456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.083 21:18:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.083 21:18:51 -- common/autotest_common.sh@862 -- # return 0 00:11:28.083 21:18:51 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:28.083 [2024-11-28 21:18:51.723454] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:28.083 [2024-11-28 21:18:51.735341] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:28.083 [2024-11-28 21:18:51.735514] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:28.083 [2024-11-28 21:18:51.735574] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:28.083 [2024-11-28 21:18:51.735687] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:28.083 [2024-11-28 21:18:51.736474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cfb80 (9): Bad file descriptor 00:11:28.083 [2024-11-28 21:18:51.737469] 
nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:11:28.083 [2024-11-28 21:18:51.737498] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:28.083 [2024-11-28 21:18:51.737510] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:11:28.083 request: 00:11:28.083 { 00:11:28.083 "name": "TLSTEST", 00:11:28.083 "trtype": "tcp", 00:11:28.083 "traddr": "10.0.0.2", 00:11:28.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:28.083 "adrfam": "ipv4", 00:11:28.083 "trsvcid": "4420", 00:11:28.083 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:11:28.083 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:28.083 "method": "bdev_nvme_attach_controller", 00:11:28.083 "req_id": 1 00:11:28.083 } 00:11:28.083 Got JSON-RPC error response 00:11:28.083 response: 00:11:28.083 { 00:11:28.083 "code": -32602, 00:11:28.083 "message": "Invalid parameters" 00:11:28.084 } 00:11:28.084 21:18:51 -- target/tls.sh@36 -- # killprocess 76598 00:11:28.084 21:18:51 -- common/autotest_common.sh@936 -- # '[' -z 76598 ']' 00:11:28.084 21:18:51 -- common/autotest_common.sh@940 -- # kill -0 76598 00:11:28.084 21:18:51 -- common/autotest_common.sh@941 -- # uname 00:11:28.084 21:18:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:28.084 21:18:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76598 00:11:28.084 killing process with pid 76598 00:11:28.084 Received shutdown signal, test time was about 10.000000 seconds 00:11:28.084 00:11:28.084 Latency(us) 00:11:28.084 [2024-11-28T21:18:51.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.084 [2024-11-28T21:18:51.827Z] =================================================================================================================== 00:11:28.084 [2024-11-28T21:18:51.827Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:28.084 21:18:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:28.084 21:18:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:28.084 21:18:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76598' 00:11:28.084 21:18:51 -- common/autotest_common.sh@955 -- # kill 76598 00:11:28.084 21:18:51 -- common/autotest_common.sh@960 -- # wait 76598 00:11:28.343 21:18:51 -- target/tls.sh@37 -- # return 1 00:11:28.343 21:18:51 -- common/autotest_common.sh@653 -- # es=1 00:11:28.343 21:18:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:28.343 21:18:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:28.343 21:18:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:28.343 21:18:51 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:28.343 21:18:51 -- common/autotest_common.sh@650 -- # local es=0 00:11:28.343 21:18:51 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:28.343 21:18:51 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:28.343 21:18:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:28.343 21:18:51 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:28.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:11:28.343 21:18:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:28.343 21:18:51 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:28.343 21:18:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:28.343 21:18:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:28.343 21:18:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:28.343 21:18:51 -- target/tls.sh@23 -- # psk= 00:11:28.343 21:18:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:28.343 21:18:51 -- target/tls.sh@28 -- # bdevperf_pid=76625 00:11:28.343 21:18:51 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:28.343 21:18:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:28.343 21:18:51 -- target/tls.sh@31 -- # waitforlisten 76625 /var/tmp/bdevperf.sock 00:11:28.343 21:18:51 -- common/autotest_common.sh@829 -- # '[' -z 76625 ']' 00:11:28.343 21:18:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:28.343 21:18:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.343 21:18:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:28.343 21:18:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.343 21:18:51 -- common/autotest_common.sh@10 -- # set +x 00:11:28.344 [2024-11-28 21:18:51.972260] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:28.344 [2024-11-28 21:18:51.972791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76625 ] 00:11:28.603 [2024-11-28 21:18:52.110025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.603 [2024-11-28 21:18:52.144085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.603 21:18:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.603 21:18:52 -- common/autotest_common.sh@862 -- # return 0 00:11:28.603 21:18:52 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:11:28.863 [2024-11-28 21:18:52.475478] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:28.863 [2024-11-28 21:18:52.477160] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb4450 (9): Bad file descriptor 00:11:28.863 [2024-11-28 21:18:52.478155] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:28.863 [2024-11-28 21:18:52.478566] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:28.863 [2024-11-28 21:18:52.478784] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
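The attach above was issued with no PSK at all (psk='' in the trace); the connection is dropped during controller initialization and the RPC again returns -32602. A standalone equivalent, with arguments copied from the trace:

  # No --psk: plain TCP attach against the TLS test listener; expected to fail the same way.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1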
00:11:28.863 request: 00:11:28.863 { 00:11:28.863 "name": "TLSTEST", 00:11:28.863 "trtype": "tcp", 00:11:28.863 "traddr": "10.0.0.2", 00:11:28.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:28.863 "adrfam": "ipv4", 00:11:28.863 "trsvcid": "4420", 00:11:28.863 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.863 "method": "bdev_nvme_attach_controller", 00:11:28.863 "req_id": 1 00:11:28.863 } 00:11:28.863 Got JSON-RPC error response 00:11:28.863 response: 00:11:28.863 { 00:11:28.863 "code": -32602, 00:11:28.863 "message": "Invalid parameters" 00:11:28.863 } 00:11:28.863 21:18:52 -- target/tls.sh@36 -- # killprocess 76625 00:11:28.863 21:18:52 -- common/autotest_common.sh@936 -- # '[' -z 76625 ']' 00:11:28.863 21:18:52 -- common/autotest_common.sh@940 -- # kill -0 76625 00:11:28.863 21:18:52 -- common/autotest_common.sh@941 -- # uname 00:11:28.863 21:18:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:28.863 21:18:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76625 00:11:28.863 killing process with pid 76625 00:11:28.863 Received shutdown signal, test time was about 10.000000 seconds 00:11:28.863 00:11:28.863 Latency(us) 00:11:28.863 [2024-11-28T21:18:52.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.863 [2024-11-28T21:18:52.606Z] =================================================================================================================== 00:11:28.863 [2024-11-28T21:18:52.606Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:28.863 21:18:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:28.863 21:18:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:28.863 21:18:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76625' 00:11:28.863 21:18:52 -- common/autotest_common.sh@955 -- # kill 76625 00:11:28.863 21:18:52 -- common/autotest_common.sh@960 -- # wait 76625 00:11:29.122 21:18:52 -- target/tls.sh@37 -- # return 1 00:11:29.122 21:18:52 -- common/autotest_common.sh@653 -- # es=1 00:11:29.122 21:18:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:29.122 21:18:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:29.122 21:18:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:29.122 21:18:52 -- target/tls.sh@167 -- # killprocess 76175 00:11:29.122 21:18:52 -- common/autotest_common.sh@936 -- # '[' -z 76175 ']' 00:11:29.122 21:18:52 -- common/autotest_common.sh@940 -- # kill -0 76175 00:11:29.123 21:18:52 -- common/autotest_common.sh@941 -- # uname 00:11:29.123 21:18:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:29.123 21:18:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76175 00:11:29.123 killing process with pid 76175 00:11:29.123 21:18:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:29.123 21:18:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:29.123 21:18:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76175' 00:11:29.123 21:18:52 -- common/autotest_common.sh@955 -- # kill 76175 00:11:29.123 21:18:52 -- common/autotest_common.sh@960 -- # wait 76175 00:11:29.123 21:18:52 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:11:29.123 21:18:52 -- target/tls.sh@49 -- # local key hash crc 00:11:29.123 21:18:52 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:11:29.123 21:18:52 -- target/tls.sh@51 -- # hash=02 
00:11:29.382 21:18:52 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:11:29.382 21:18:52 -- target/tls.sh@52 -- # gzip -1 -c 00:11:29.382 21:18:52 -- target/tls.sh@52 -- # tail -c8 00:11:29.382 21:18:52 -- target/tls.sh@52 -- # head -c 4 00:11:29.382 21:18:52 -- target/tls.sh@52 -- # crc='�e�'\''' 00:11:29.382 21:18:52 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:29.382 21:18:52 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:11:29.382 21:18:52 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:29.382 21:18:52 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:29.382 21:18:52 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:29.382 21:18:52 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:29.382 21:18:52 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:29.382 21:18:52 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:11:29.382 21:18:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:29.382 21:18:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:29.382 21:18:52 -- common/autotest_common.sh@10 -- # set +x 00:11:29.382 21:18:52 -- nvmf/common.sh@469 -- # nvmfpid=76660 00:11:29.382 21:18:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:29.382 21:18:52 -- nvmf/common.sh@470 -- # waitforlisten 76660 00:11:29.382 21:18:52 -- common/autotest_common.sh@829 -- # '[' -z 76660 ']' 00:11:29.382 21:18:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.382 21:18:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:29.382 21:18:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.382 21:18:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:29.382 21:18:52 -- common/autotest_common.sh@10 -- # set +x 00:11:29.382 [2024-11-28 21:18:52.952676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:29.382 [2024-11-28 21:18:52.952811] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.382 [2024-11-28 21:18:53.096344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.641 [2024-11-28 21:18:53.128641] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:29.641 [2024-11-28 21:18:53.128786] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.641 [2024-11-28 21:18:53.128798] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.641 [2024-11-28 21:18:53.128805] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
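The NVMeTLSkey-1 string generated just above by format_interchange_psk can be reconstructed by hand from the trace: the last 8 bytes of a gzip stream are the CRC32 of the input (little-endian) followed by the input length, so tail -c8 | head -c4 yields the CRC32 of the configured key, which is appended to the key and base64-encoded. A minimal sketch using the same key and hash indicator (02) as the trace; the crc variable holds raw bytes, which happens to work for this particular key (the script itself assembles the string via /dev/fd/62):

  key=00112233445566778899aabbccddeeff0011223344556677
  # gzip -1 trailer = CRC32 (4 bytes, little-endian) + input size (4 bytes); keep only the CRC32
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
  echo "NVMeTLSkey-1:02:$(echo -n "${key}${crc}" | base64):"
  # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: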
00:11:29.641 [2024-11-28 21:18:53.128835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.209 21:18:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:30.209 21:18:53 -- common/autotest_common.sh@862 -- # return 0 00:11:30.209 21:18:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:30.209 21:18:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:30.209 21:18:53 -- common/autotest_common.sh@10 -- # set +x 00:11:30.209 21:18:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.209 21:18:53 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:30.209 21:18:53 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:30.209 21:18:53 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:30.468 [2024-11-28 21:18:54.142387] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.468 21:18:54 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:30.726 21:18:54 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:30.986 [2024-11-28 21:18:54.646558] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:30.986 [2024-11-28 21:18:54.646775] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.986 21:18:54 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:31.245 malloc0 00:11:31.245 21:18:54 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:31.527 21:18:55 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:31.786 21:18:55 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:31.786 21:18:55 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:31.786 21:18:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:31.786 21:18:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:31.786 21:18:55 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:31.786 21:18:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:31.786 21:18:55 -- target/tls.sh@28 -- # bdevperf_pid=76709 00:11:31.786 21:18:55 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:31.786 21:18:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:31.786 21:18:55 -- target/tls.sh@31 -- # waitforlisten 76709 /var/tmp/bdevperf.sock 00:11:31.786 21:18:55 -- common/autotest_common.sh@829 -- # '[' -z 76709 ']' 00:11:31.786 21:18:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:31.786 21:18:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.786 21:18:55 -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:31.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:31.786 21:18:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.786 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:11:31.786 [2024-11-28 21:18:55.443055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:31.786 [2024-11-28 21:18:55.443443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76709 ] 00:11:32.044 [2024-11-28 21:18:55.590335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.044 [2024-11-28 21:18:55.632579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.982 21:18:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:32.982 21:18:56 -- common/autotest_common.sh@862 -- # return 0 00:11:32.982 21:18:56 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:32.982 [2024-11-28 21:18:56.678539] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:33.241 TLSTESTn1 00:11:33.241 21:18:56 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:33.241 Running I/O for 10 seconds... 00:11:43.252 00:11:43.252 Latency(us) 00:11:43.252 [2024-11-28T21:19:06.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:43.252 [2024-11-28T21:19:06.995Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:43.252 Verification LBA range: start 0x0 length 0x2000 00:11:43.252 TLSTESTn1 : 10.01 6098.92 23.82 0.00 0.00 20956.97 2710.81 20494.89 00:11:43.252 [2024-11-28T21:19:06.995Z] =================================================================================================================== 00:11:43.252 [2024-11-28T21:19:06.995Z] Total : 6098.92 23.82 0.00 0.00 20956.97 2710.81 20494.89 00:11:43.252 0 00:11:43.252 21:19:06 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:43.252 21:19:06 -- target/tls.sh@45 -- # killprocess 76709 00:11:43.252 21:19:06 -- common/autotest_common.sh@936 -- # '[' -z 76709 ']' 00:11:43.252 21:19:06 -- common/autotest_common.sh@940 -- # kill -0 76709 00:11:43.252 21:19:06 -- common/autotest_common.sh@941 -- # uname 00:11:43.511 21:19:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:43.511 21:19:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76709 00:11:43.511 killing process with pid 76709 00:11:43.511 Received shutdown signal, test time was about 10.000000 seconds 00:11:43.511 00:11:43.511 Latency(us) 00:11:43.511 [2024-11-28T21:19:07.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:43.511 [2024-11-28T21:19:07.254Z] =================================================================================================================== 00:11:43.511 [2024-11-28T21:19:07.254Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:43.511 21:19:07 -- 
common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:43.511 21:19:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:43.511 21:19:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76709' 00:11:43.511 21:19:07 -- common/autotest_common.sh@955 -- # kill 76709 00:11:43.511 21:19:07 -- common/autotest_common.sh@960 -- # wait 76709 00:11:43.511 21:19:07 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:43.511 21:19:07 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:43.511 21:19:07 -- common/autotest_common.sh@650 -- # local es=0 00:11:43.511 21:19:07 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:43.511 21:19:07 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:43.511 21:19:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:43.511 21:19:07 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:43.511 21:19:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:43.511 21:19:07 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:43.511 21:19:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:43.511 21:19:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:43.511 21:19:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:43.511 21:19:07 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:43.511 21:19:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:43.511 21:19:07 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:43.511 21:19:07 -- target/tls.sh@28 -- # bdevperf_pid=76851 00:11:43.511 21:19:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:43.511 21:19:07 -- target/tls.sh@31 -- # waitforlisten 76851 /var/tmp/bdevperf.sock 00:11:43.511 21:19:07 -- common/autotest_common.sh@829 -- # '[' -z 76851 ']' 00:11:43.511 21:19:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:43.511 21:19:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.511 21:19:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:43.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:43.511 21:19:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.511 21:19:07 -- common/autotest_common.sh@10 -- # set +x 00:11:43.511 [2024-11-28 21:19:07.209426] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
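Just above, key_long.txt is deliberately made group/world readable (chmod 0666) before another attach attempt. The initiator refuses to load a PSK file with permissive mode bits, so the attempt that follows is expected to fail with -22 ("Could not retrieve PSK from file") before any handshake is tried. A standalone equivalent, with paths copied from the trace:

  chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
  # Expected: "Incorrect permissions for PSK file" and a -22 JSON-RPC error.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt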
00:11:43.511 [2024-11-28 21:19:07.209822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76851 ] 00:11:43.770 [2024-11-28 21:19:07.345184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.770 [2024-11-28 21:19:07.382086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.770 21:19:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.770 21:19:07 -- common/autotest_common.sh@862 -- # return 0 00:11:43.770 21:19:07 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:44.029 [2024-11-28 21:19:07.728214] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:44.029 [2024-11-28 21:19:07.728798] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:44.029 request: 00:11:44.029 { 00:11:44.029 "name": "TLSTEST", 00:11:44.029 "trtype": "tcp", 00:11:44.029 "traddr": "10.0.0.2", 00:11:44.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:44.029 "adrfam": "ipv4", 00:11:44.029 "trsvcid": "4420", 00:11:44.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:44.029 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:44.029 "method": "bdev_nvme_attach_controller", 00:11:44.029 "req_id": 1 00:11:44.029 } 00:11:44.029 Got JSON-RPC error response 00:11:44.029 response: 00:11:44.029 { 00:11:44.029 "code": -22, 00:11:44.029 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:44.029 } 00:11:44.029 21:19:07 -- target/tls.sh@36 -- # killprocess 76851 00:11:44.029 21:19:07 -- common/autotest_common.sh@936 -- # '[' -z 76851 ']' 00:11:44.029 21:19:07 -- common/autotest_common.sh@940 -- # kill -0 76851 00:11:44.029 21:19:07 -- common/autotest_common.sh@941 -- # uname 00:11:44.029 21:19:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:44.029 21:19:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76851 00:11:44.289 killing process with pid 76851 00:11:44.289 Received shutdown signal, test time was about 10.000000 seconds 00:11:44.289 00:11:44.289 Latency(us) 00:11:44.289 [2024-11-28T21:19:08.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.289 [2024-11-28T21:19:08.032Z] =================================================================================================================== 00:11:44.289 [2024-11-28T21:19:08.032Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:44.289 21:19:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:44.289 21:19:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:44.289 21:19:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76851' 00:11:44.289 21:19:07 -- common/autotest_common.sh@955 -- # kill 76851 00:11:44.289 21:19:07 -- common/autotest_common.sh@960 -- # wait 76851 00:11:44.289 21:19:07 -- target/tls.sh@37 -- # return 1 00:11:44.289 21:19:07 -- common/autotest_common.sh@653 -- # es=1 00:11:44.289 21:19:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:44.289 21:19:07 -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:44.289 21:19:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:44.289 21:19:07 -- target/tls.sh@183 -- # killprocess 76660 00:11:44.289 21:19:07 -- common/autotest_common.sh@936 -- # '[' -z 76660 ']' 00:11:44.289 21:19:07 -- common/autotest_common.sh@940 -- # kill -0 76660 00:11:44.289 21:19:07 -- common/autotest_common.sh@941 -- # uname 00:11:44.289 21:19:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:44.289 21:19:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76660 00:11:44.289 killing process with pid 76660 00:11:44.289 21:19:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:44.289 21:19:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:44.289 21:19:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76660' 00:11:44.289 21:19:07 -- common/autotest_common.sh@955 -- # kill 76660 00:11:44.289 21:19:07 -- common/autotest_common.sh@960 -- # wait 76660 00:11:44.549 21:19:08 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:11:44.549 21:19:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:44.549 21:19:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:44.549 21:19:08 -- common/autotest_common.sh@10 -- # set +x 00:11:44.549 21:19:08 -- nvmf/common.sh@469 -- # nvmfpid=76875 00:11:44.549 21:19:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:44.549 21:19:08 -- nvmf/common.sh@470 -- # waitforlisten 76875 00:11:44.549 21:19:08 -- common/autotest_common.sh@829 -- # '[' -z 76875 ']' 00:11:44.549 21:19:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.549 21:19:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:44.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.549 21:19:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.549 21:19:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:44.549 21:19:08 -- common/autotest_common.sh@10 -- # set +x 00:11:44.549 [2024-11-28 21:19:08.173019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:44.549 [2024-11-28 21:19:08.173303] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.808 [2024-11-28 21:19:08.311087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.808 [2024-11-28 21:19:08.341944] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:44.808 [2024-11-28 21:19:08.342141] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.808 [2024-11-28 21:19:08.342185] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.808 [2024-11-28 21:19:08.342194] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:44.808 [2024-11-28 21:19:08.342238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.808 21:19:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.808 21:19:08 -- common/autotest_common.sh@862 -- # return 0 00:11:44.808 21:19:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:44.808 21:19:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:44.808 21:19:08 -- common/autotest_common.sh@10 -- # set +x 00:11:44.808 21:19:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.808 21:19:08 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:44.808 21:19:08 -- common/autotest_common.sh@650 -- # local es=0 00:11:44.808 21:19:08 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:44.808 21:19:08 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:11:44.808 21:19:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:44.808 21:19:08 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:11:44.808 21:19:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:44.808 21:19:08 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:44.808 21:19:08 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:44.808 21:19:08 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:45.067 [2024-11-28 21:19:08.745455] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:45.067 21:19:08 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:45.326 21:19:09 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:45.585 [2024-11-28 21:19:09.293667] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:45.585 [2024-11-28 21:19:09.293892] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.585 21:19:09 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:45.844 malloc0 00:11:46.103 21:19:09 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:46.362 21:19:09 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:46.362 [2024-11-28 21:19:10.088535] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:46.362 [2024-11-28 21:19:10.088841] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:11:46.362 [2024-11-28 21:19:10.088885] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:11:46.362 request: 00:11:46.362 { 00:11:46.362 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:46.362 "host": "nqn.2016-06.io.spdk:host1", 00:11:46.362 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:46.362 "method": "nvmf_subsystem_add_host", 00:11:46.362 
"req_id": 1 00:11:46.362 } 00:11:46.362 Got JSON-RPC error response 00:11:46.362 response: 00:11:46.362 { 00:11:46.362 "code": -32603, 00:11:46.362 "message": "Internal error" 00:11:46.362 } 00:11:46.621 21:19:10 -- common/autotest_common.sh@653 -- # es=1 00:11:46.621 21:19:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:46.621 21:19:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:46.621 21:19:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:46.621 21:19:10 -- target/tls.sh@189 -- # killprocess 76875 00:11:46.621 21:19:10 -- common/autotest_common.sh@936 -- # '[' -z 76875 ']' 00:11:46.621 21:19:10 -- common/autotest_common.sh@940 -- # kill -0 76875 00:11:46.621 21:19:10 -- common/autotest_common.sh@941 -- # uname 00:11:46.621 21:19:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:46.621 21:19:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76875 00:11:46.621 killing process with pid 76875 00:11:46.621 21:19:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:46.621 21:19:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:46.621 21:19:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76875' 00:11:46.621 21:19:10 -- common/autotest_common.sh@955 -- # kill 76875 00:11:46.621 21:19:10 -- common/autotest_common.sh@960 -- # wait 76875 00:11:46.621 21:19:10 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:46.621 21:19:10 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:11:46.621 21:19:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:46.621 21:19:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:46.621 21:19:10 -- common/autotest_common.sh@10 -- # set +x 00:11:46.621 21:19:10 -- nvmf/common.sh@469 -- # nvmfpid=76927 00:11:46.621 21:19:10 -- nvmf/common.sh@470 -- # waitforlisten 76927 00:11:46.621 21:19:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:46.621 21:19:10 -- common/autotest_common.sh@829 -- # '[' -z 76927 ']' 00:11:46.621 21:19:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.621 21:19:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:46.621 21:19:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.621 21:19:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:46.621 21:19:10 -- common/autotest_common.sh@10 -- # set +x 00:11:46.621 [2024-11-28 21:19:10.335582] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:46.621 [2024-11-28 21:19:10.335706] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.880 [2024-11-28 21:19:10.475581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.880 [2024-11-28 21:19:10.505827] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:46.880 [2024-11-28 21:19:10.505977] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:46.880 [2024-11-28 21:19:10.505989] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.880 [2024-11-28 21:19:10.505997] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.880 [2024-11-28 21:19:10.506053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.817 21:19:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:47.817 21:19:11 -- common/autotest_common.sh@862 -- # return 0 00:11:47.817 21:19:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:47.817 21:19:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:47.817 21:19:11 -- common/autotest_common.sh@10 -- # set +x 00:11:47.817 21:19:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.817 21:19:11 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:47.817 21:19:11 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:47.817 21:19:11 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:48.076 [2024-11-28 21:19:11.579495] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.076 21:19:11 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:48.077 21:19:11 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:48.335 [2024-11-28 21:19:12.067616] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:48.335 [2024-11-28 21:19:12.068059] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.594 21:19:12 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:48.594 malloc0 00:11:48.594 21:19:12 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:48.853 21:19:12 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:49.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:49.112 21:19:12 -- target/tls.sh@197 -- # bdevperf_pid=76982 00:11:49.112 21:19:12 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:49.112 21:19:12 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:49.112 21:19:12 -- target/tls.sh@200 -- # waitforlisten 76982 /var/tmp/bdevperf.sock 00:11:49.112 21:19:12 -- common/autotest_common.sh@829 -- # '[' -z 76982 ']' 00:11:49.112 21:19:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:49.112 21:19:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:49.112 21:19:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
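The target-side sequence traced above (chmod 0600 on the key, then setup_nvmf_tgt) is what a TLS-enabled subsystem needs end to end. All arguments below are copied from the trace; RPC and KEY are introduced here only as shorthand. Note that nvmf_subsystem_add_host only accepts the key once the file is mode 0600 — with 0666 it failed earlier with -32603 ("Internal error" / "Could not retrieve PSK from file").

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
  chmod 0600 "$KEY"
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"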
00:11:49.112 21:19:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:49.112 21:19:12 -- common/autotest_common.sh@10 -- # set +x 00:11:49.112 [2024-11-28 21:19:12.817783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:49.112 [2024-11-28 21:19:12.818068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76982 ] 00:11:49.371 [2024-11-28 21:19:12.955606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.371 [2024-11-28 21:19:12.997131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.307 21:19:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:50.307 21:19:13 -- common/autotest_common.sh@862 -- # return 0 00:11:50.308 21:19:13 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:50.308 [2024-11-28 21:19:13.930033] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:50.308 TLSTESTn1 00:11:50.308 21:19:14 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:50.875 21:19:14 -- target/tls.sh@205 -- # tgtconf='{ 00:11:50.875 "subsystems": [ 00:11:50.875 { 00:11:50.875 "subsystem": "iobuf", 00:11:50.875 "config": [ 00:11:50.875 { 00:11:50.875 "method": "iobuf_set_options", 00:11:50.875 "params": { 00:11:50.875 "small_pool_count": 8192, 00:11:50.875 "large_pool_count": 1024, 00:11:50.875 "small_bufsize": 8192, 00:11:50.875 "large_bufsize": 135168 00:11:50.875 } 00:11:50.875 } 00:11:50.875 ] 00:11:50.875 }, 00:11:50.875 { 00:11:50.875 "subsystem": "sock", 00:11:50.875 "config": [ 00:11:50.875 { 00:11:50.875 "method": "sock_impl_set_options", 00:11:50.875 "params": { 00:11:50.875 "impl_name": "uring", 00:11:50.875 "recv_buf_size": 2097152, 00:11:50.875 "send_buf_size": 2097152, 00:11:50.875 "enable_recv_pipe": true, 00:11:50.875 "enable_quickack": false, 00:11:50.875 "enable_placement_id": 0, 00:11:50.875 "enable_zerocopy_send_server": false, 00:11:50.875 "enable_zerocopy_send_client": false, 00:11:50.875 "zerocopy_threshold": 0, 00:11:50.875 "tls_version": 0, 00:11:50.875 "enable_ktls": false 00:11:50.875 } 00:11:50.875 }, 00:11:50.875 { 00:11:50.875 "method": "sock_impl_set_options", 00:11:50.875 "params": { 00:11:50.875 "impl_name": "posix", 00:11:50.875 "recv_buf_size": 2097152, 00:11:50.875 "send_buf_size": 2097152, 00:11:50.875 "enable_recv_pipe": true, 00:11:50.875 "enable_quickack": false, 00:11:50.875 "enable_placement_id": 0, 00:11:50.875 "enable_zerocopy_send_server": true, 00:11:50.875 "enable_zerocopy_send_client": false, 00:11:50.875 "zerocopy_threshold": 0, 00:11:50.875 "tls_version": 0, 00:11:50.875 "enable_ktls": false 00:11:50.875 } 00:11:50.875 }, 00:11:50.875 { 00:11:50.875 "method": "sock_impl_set_options", 00:11:50.875 "params": { 00:11:50.875 "impl_name": "ssl", 00:11:50.875 "recv_buf_size": 4096, 00:11:50.875 "send_buf_size": 4096, 00:11:50.875 "enable_recv_pipe": true, 00:11:50.875 "enable_quickack": false, 00:11:50.875 "enable_placement_id": 0, 00:11:50.875 "enable_zerocopy_send_server": true, 00:11:50.875 "enable_zerocopy_send_client": false, 00:11:50.875 
"zerocopy_threshold": 0, 00:11:50.875 "tls_version": 0, 00:11:50.875 "enable_ktls": false 00:11:50.875 } 00:11:50.875 } 00:11:50.875 ] 00:11:50.875 }, 00:11:50.875 { 00:11:50.875 "subsystem": "vmd", 00:11:50.875 "config": [] 00:11:50.875 }, 00:11:50.875 { 00:11:50.875 "subsystem": "accel", 00:11:50.875 "config": [ 00:11:50.875 { 00:11:50.875 "method": "accel_set_options", 00:11:50.875 "params": { 00:11:50.875 "small_cache_size": 128, 00:11:50.875 "large_cache_size": 16, 00:11:50.875 "task_count": 2048, 00:11:50.875 "sequence_count": 2048, 00:11:50.875 "buf_count": 2048 00:11:50.875 } 00:11:50.875 } 00:11:50.875 ] 00:11:50.875 }, 00:11:50.875 { 00:11:50.875 "subsystem": "bdev", 00:11:50.875 "config": [ 00:11:50.875 { 00:11:50.875 "method": "bdev_set_options", 00:11:50.875 "params": { 00:11:50.875 "bdev_io_pool_size": 65535, 00:11:50.875 "bdev_io_cache_size": 256, 00:11:50.875 "bdev_auto_examine": true, 00:11:50.875 "iobuf_small_cache_size": 128, 00:11:50.875 "iobuf_large_cache_size": 16 00:11:50.875 } 00:11:50.875 }, 00:11:50.875 { 00:11:50.875 "method": "bdev_raid_set_options", 00:11:50.875 "params": { 00:11:50.875 "process_window_size_kb": 1024 00:11:50.875 } 00:11:50.875 }, 00:11:50.875 { 00:11:50.875 "method": "bdev_iscsi_set_options", 00:11:50.875 "params": { 00:11:50.875 "timeout_sec": 30 00:11:50.875 } 00:11:50.875 }, 00:11:50.875 { 00:11:50.875 "method": "bdev_nvme_set_options", 00:11:50.875 "params": { 00:11:50.875 "action_on_timeout": "none", 00:11:50.875 "timeout_us": 0, 00:11:50.875 "timeout_admin_us": 0, 00:11:50.875 "keep_alive_timeout_ms": 10000, 00:11:50.875 "transport_retry_count": 4, 00:11:50.875 "arbitration_burst": 0, 00:11:50.875 "low_priority_weight": 0, 00:11:50.875 "medium_priority_weight": 0, 00:11:50.875 "high_priority_weight": 0, 00:11:50.875 "nvme_adminq_poll_period_us": 10000, 00:11:50.875 "nvme_ioq_poll_period_us": 0, 00:11:50.875 "io_queue_requests": 0, 00:11:50.875 "delay_cmd_submit": true, 00:11:50.876 "bdev_retry_count": 3, 00:11:50.876 "transport_ack_timeout": 0, 00:11:50.876 "ctrlr_loss_timeout_sec": 0, 00:11:50.876 "reconnect_delay_sec": 0, 00:11:50.876 "fast_io_fail_timeout_sec": 0, 00:11:50.876 "generate_uuids": false, 00:11:50.876 "transport_tos": 0, 00:11:50.876 "io_path_stat": false, 00:11:50.876 "allow_accel_sequence": false 00:11:50.876 } 00:11:50.876 }, 00:11:50.876 { 00:11:50.876 "method": "bdev_nvme_set_hotplug", 00:11:50.876 "params": { 00:11:50.876 "period_us": 100000, 00:11:50.876 "enable": false 00:11:50.876 } 00:11:50.876 }, 00:11:50.876 { 00:11:50.876 "method": "bdev_malloc_create", 00:11:50.876 "params": { 00:11:50.876 "name": "malloc0", 00:11:50.876 "num_blocks": 8192, 00:11:50.876 "block_size": 4096, 00:11:50.876 "physical_block_size": 4096, 00:11:50.876 "uuid": "e7237587-df6d-4f4b-a782-fd2ed1f8db15", 00:11:50.876 "optimal_io_boundary": 0 00:11:50.876 } 00:11:50.876 }, 00:11:50.876 { 00:11:50.876 "method": "bdev_wait_for_examine" 00:11:50.876 } 00:11:50.876 ] 00:11:50.876 }, 00:11:50.876 { 00:11:50.876 "subsystem": "nbd", 00:11:50.876 "config": [] 00:11:50.876 }, 00:11:50.876 { 00:11:50.876 "subsystem": "scheduler", 00:11:50.876 "config": [ 00:11:50.876 { 00:11:50.876 "method": "framework_set_scheduler", 00:11:50.876 "params": { 00:11:50.876 "name": "static" 00:11:50.876 } 00:11:50.876 } 00:11:50.876 ] 00:11:50.876 }, 00:11:50.876 { 00:11:50.876 "subsystem": "nvmf", 00:11:50.876 "config": [ 00:11:50.876 { 00:11:50.876 "method": "nvmf_set_config", 00:11:50.876 "params": { 00:11:50.876 "discovery_filter": "match_any", 00:11:50.876 
"admin_cmd_passthru": { 00:11:50.876 "identify_ctrlr": false 00:11:50.876 } 00:11:50.876 } 00:11:50.876 }, 00:11:50.876 { 00:11:50.876 "method": "nvmf_set_max_subsystems", 00:11:50.876 "params": { 00:11:50.876 "max_subsystems": 1024 00:11:50.876 } 00:11:50.876 }, 00:11:50.876 { 00:11:50.876 "method": "nvmf_set_crdt", 00:11:50.876 "params": { 00:11:50.876 "crdt1": 0, 00:11:50.876 "crdt2": 0, 00:11:50.876 "crdt3": 0 00:11:50.876 } 00:11:50.876 }, 00:11:50.876 { 00:11:50.876 "method": "nvmf_create_transport", 00:11:50.876 "params": { 00:11:50.876 "trtype": "TCP", 00:11:50.876 "max_queue_depth": 128, 00:11:50.876 "max_io_qpairs_per_ctrlr": 127, 00:11:50.876 "in_capsule_data_size": 4096, 00:11:50.876 "max_io_size": 131072, 00:11:50.876 "io_unit_size": 131072, 00:11:50.876 "max_aq_depth": 128, 00:11:50.876 "num_shared_buffers": 511, 00:11:50.876 "buf_cache_size": 4294967295, 00:11:50.876 "dif_insert_or_strip": false, 00:11:50.876 "zcopy": false, 00:11:50.876 "c2h_success": false, 00:11:50.876 "sock_priority": 0, 00:11:50.876 "abort_timeout_sec": 1 00:11:50.876 } 00:11:50.876 }, 00:11:50.876 { 00:11:50.876 "method": "nvmf_create_subsystem", 00:11:50.876 "params": { 00:11:50.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:50.876 "allow_any_host": false, 00:11:50.876 "serial_number": "SPDK00000000000001", 00:11:50.876 "model_number": "SPDK bdev Controller", 00:11:50.876 "max_namespaces": 10, 00:11:50.876 "min_cntlid": 1, 00:11:50.876 "max_cntlid": 65519, 00:11:50.876 "ana_reporting": false 00:11:50.876 } 00:11:50.876 }, 00:11:50.876 { 00:11:50.876 "method": "nvmf_subsystem_add_host", 00:11:50.876 "params": { 00:11:50.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:50.876 "host": "nqn.2016-06.io.spdk:host1", 00:11:50.876 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:50.876 } 00:11:50.876 }, 00:11:50.876 { 00:11:50.876 "method": "nvmf_subsystem_add_ns", 00:11:50.876 "params": { 00:11:50.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:50.876 "namespace": { 00:11:50.876 "nsid": 1, 00:11:50.876 "bdev_name": "malloc0", 00:11:50.876 "nguid": "E7237587DF6D4F4BA782FD2ED1F8DB15", 00:11:50.876 "uuid": "e7237587-df6d-4f4b-a782-fd2ed1f8db15" 00:11:50.876 } 00:11:50.876 } 00:11:50.876 }, 00:11:50.876 { 00:11:50.876 "method": "nvmf_subsystem_add_listener", 00:11:50.876 "params": { 00:11:50.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:50.876 "listen_address": { 00:11:50.876 "trtype": "TCP", 00:11:50.876 "adrfam": "IPv4", 00:11:50.876 "traddr": "10.0.0.2", 00:11:50.876 "trsvcid": "4420" 00:11:50.876 }, 00:11:50.876 "secure_channel": true 00:11:50.876 } 00:11:50.876 } 00:11:50.876 ] 00:11:50.876 } 00:11:50.876 ] 00:11:50.876 }' 00:11:50.876 21:19:14 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:11:51.136 21:19:14 -- target/tls.sh@206 -- # bdevperfconf='{ 00:11:51.136 "subsystems": [ 00:11:51.136 { 00:11:51.136 "subsystem": "iobuf", 00:11:51.136 "config": [ 00:11:51.136 { 00:11:51.136 "method": "iobuf_set_options", 00:11:51.136 "params": { 00:11:51.136 "small_pool_count": 8192, 00:11:51.136 "large_pool_count": 1024, 00:11:51.136 "small_bufsize": 8192, 00:11:51.136 "large_bufsize": 135168 00:11:51.136 } 00:11:51.136 } 00:11:51.136 ] 00:11:51.136 }, 00:11:51.136 { 00:11:51.136 "subsystem": "sock", 00:11:51.136 "config": [ 00:11:51.136 { 00:11:51.136 "method": "sock_impl_set_options", 00:11:51.136 "params": { 00:11:51.136 "impl_name": "uring", 00:11:51.136 "recv_buf_size": 2097152, 00:11:51.136 "send_buf_size": 2097152, 
00:11:51.136 "enable_recv_pipe": true, 00:11:51.136 "enable_quickack": false, 00:11:51.136 "enable_placement_id": 0, 00:11:51.136 "enable_zerocopy_send_server": false, 00:11:51.136 "enable_zerocopy_send_client": false, 00:11:51.136 "zerocopy_threshold": 0, 00:11:51.136 "tls_version": 0, 00:11:51.136 "enable_ktls": false 00:11:51.136 } 00:11:51.136 }, 00:11:51.136 { 00:11:51.136 "method": "sock_impl_set_options", 00:11:51.136 "params": { 00:11:51.136 "impl_name": "posix", 00:11:51.136 "recv_buf_size": 2097152, 00:11:51.136 "send_buf_size": 2097152, 00:11:51.136 "enable_recv_pipe": true, 00:11:51.136 "enable_quickack": false, 00:11:51.136 "enable_placement_id": 0, 00:11:51.136 "enable_zerocopy_send_server": true, 00:11:51.136 "enable_zerocopy_send_client": false, 00:11:51.136 "zerocopy_threshold": 0, 00:11:51.136 "tls_version": 0, 00:11:51.136 "enable_ktls": false 00:11:51.136 } 00:11:51.136 }, 00:11:51.136 { 00:11:51.136 "method": "sock_impl_set_options", 00:11:51.136 "params": { 00:11:51.136 "impl_name": "ssl", 00:11:51.136 "recv_buf_size": 4096, 00:11:51.136 "send_buf_size": 4096, 00:11:51.136 "enable_recv_pipe": true, 00:11:51.136 "enable_quickack": false, 00:11:51.136 "enable_placement_id": 0, 00:11:51.136 "enable_zerocopy_send_server": true, 00:11:51.136 "enable_zerocopy_send_client": false, 00:11:51.136 "zerocopy_threshold": 0, 00:11:51.136 "tls_version": 0, 00:11:51.136 "enable_ktls": false 00:11:51.136 } 00:11:51.136 } 00:11:51.136 ] 00:11:51.136 }, 00:11:51.136 { 00:11:51.136 "subsystem": "vmd", 00:11:51.136 "config": [] 00:11:51.136 }, 00:11:51.136 { 00:11:51.136 "subsystem": "accel", 00:11:51.136 "config": [ 00:11:51.136 { 00:11:51.136 "method": "accel_set_options", 00:11:51.136 "params": { 00:11:51.136 "small_cache_size": 128, 00:11:51.136 "large_cache_size": 16, 00:11:51.136 "task_count": 2048, 00:11:51.136 "sequence_count": 2048, 00:11:51.136 "buf_count": 2048 00:11:51.136 } 00:11:51.136 } 00:11:51.136 ] 00:11:51.136 }, 00:11:51.136 { 00:11:51.136 "subsystem": "bdev", 00:11:51.136 "config": [ 00:11:51.136 { 00:11:51.136 "method": "bdev_set_options", 00:11:51.136 "params": { 00:11:51.136 "bdev_io_pool_size": 65535, 00:11:51.136 "bdev_io_cache_size": 256, 00:11:51.136 "bdev_auto_examine": true, 00:11:51.136 "iobuf_small_cache_size": 128, 00:11:51.136 "iobuf_large_cache_size": 16 00:11:51.136 } 00:11:51.136 }, 00:11:51.136 { 00:11:51.136 "method": "bdev_raid_set_options", 00:11:51.136 "params": { 00:11:51.136 "process_window_size_kb": 1024 00:11:51.136 } 00:11:51.136 }, 00:11:51.136 { 00:11:51.136 "method": "bdev_iscsi_set_options", 00:11:51.136 "params": { 00:11:51.136 "timeout_sec": 30 00:11:51.136 } 00:11:51.136 }, 00:11:51.136 { 00:11:51.136 "method": "bdev_nvme_set_options", 00:11:51.136 "params": { 00:11:51.136 "action_on_timeout": "none", 00:11:51.136 "timeout_us": 0, 00:11:51.136 "timeout_admin_us": 0, 00:11:51.136 "keep_alive_timeout_ms": 10000, 00:11:51.136 "transport_retry_count": 4, 00:11:51.136 "arbitration_burst": 0, 00:11:51.136 "low_priority_weight": 0, 00:11:51.136 "medium_priority_weight": 0, 00:11:51.136 "high_priority_weight": 0, 00:11:51.136 "nvme_adminq_poll_period_us": 10000, 00:11:51.136 "nvme_ioq_poll_period_us": 0, 00:11:51.136 "io_queue_requests": 512, 00:11:51.136 "delay_cmd_submit": true, 00:11:51.136 "bdev_retry_count": 3, 00:11:51.136 "transport_ack_timeout": 0, 00:11:51.136 "ctrlr_loss_timeout_sec": 0, 00:11:51.136 "reconnect_delay_sec": 0, 00:11:51.136 "fast_io_fail_timeout_sec": 0, 00:11:51.136 "generate_uuids": false, 00:11:51.136 
"transport_tos": 0, 00:11:51.136 "io_path_stat": false, 00:11:51.136 "allow_accel_sequence": false 00:11:51.136 } 00:11:51.136 }, 00:11:51.136 { 00:11:51.136 "method": "bdev_nvme_attach_controller", 00:11:51.136 "params": { 00:11:51.136 "name": "TLSTEST", 00:11:51.136 "trtype": "TCP", 00:11:51.136 "adrfam": "IPv4", 00:11:51.136 "traddr": "10.0.0.2", 00:11:51.136 "trsvcid": "4420", 00:11:51.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:51.136 "prchk_reftag": false, 00:11:51.136 "prchk_guard": false, 00:11:51.136 "ctrlr_loss_timeout_sec": 0, 00:11:51.136 "reconnect_delay_sec": 0, 00:11:51.136 "fast_io_fail_timeout_sec": 0, 00:11:51.136 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:51.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:51.136 "hdgst": false, 00:11:51.136 "ddgst": false 00:11:51.136 } 00:11:51.136 }, 00:11:51.136 { 00:11:51.136 "method": "bdev_nvme_set_hotplug", 00:11:51.136 "params": { 00:11:51.136 "period_us": 100000, 00:11:51.136 "enable": false 00:11:51.136 } 00:11:51.136 }, 00:11:51.136 { 00:11:51.136 "method": "bdev_wait_for_examine" 00:11:51.136 } 00:11:51.136 ] 00:11:51.136 }, 00:11:51.136 { 00:11:51.136 "subsystem": "nbd", 00:11:51.136 "config": [] 00:11:51.136 } 00:11:51.136 ] 00:11:51.136 }' 00:11:51.136 21:19:14 -- target/tls.sh@208 -- # killprocess 76982 00:11:51.137 21:19:14 -- common/autotest_common.sh@936 -- # '[' -z 76982 ']' 00:11:51.137 21:19:14 -- common/autotest_common.sh@940 -- # kill -0 76982 00:11:51.137 21:19:14 -- common/autotest_common.sh@941 -- # uname 00:11:51.137 21:19:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:51.137 21:19:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76982 00:11:51.137 killing process with pid 76982 00:11:51.137 Received shutdown signal, test time was about 10.000000 seconds 00:11:51.137 00:11:51.137 Latency(us) 00:11:51.137 [2024-11-28T21:19:14.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:51.137 [2024-11-28T21:19:14.880Z] =================================================================================================================== 00:11:51.137 [2024-11-28T21:19:14.880Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:51.137 21:19:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:51.137 21:19:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:51.137 21:19:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76982' 00:11:51.137 21:19:14 -- common/autotest_common.sh@955 -- # kill 76982 00:11:51.137 21:19:14 -- common/autotest_common.sh@960 -- # wait 76982 00:11:51.137 21:19:14 -- target/tls.sh@209 -- # killprocess 76927 00:11:51.137 21:19:14 -- common/autotest_common.sh@936 -- # '[' -z 76927 ']' 00:11:51.137 21:19:14 -- common/autotest_common.sh@940 -- # kill -0 76927 00:11:51.137 21:19:14 -- common/autotest_common.sh@941 -- # uname 00:11:51.137 21:19:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:51.137 21:19:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76927 00:11:51.137 killing process with pid 76927 00:11:51.137 21:19:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:51.137 21:19:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:51.137 21:19:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76927' 00:11:51.137 21:19:14 -- common/autotest_common.sh@955 -- # kill 76927 00:11:51.137 21:19:14 -- common/autotest_common.sh@960 -- # 
wait 76927 00:11:51.396 21:19:14 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:11:51.396 21:19:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:51.396 21:19:14 -- target/tls.sh@212 -- # echo '{ 00:11:51.396 "subsystems": [ 00:11:51.396 { 00:11:51.396 "subsystem": "iobuf", 00:11:51.396 "config": [ 00:11:51.396 { 00:11:51.396 "method": "iobuf_set_options", 00:11:51.396 "params": { 00:11:51.396 "small_pool_count": 8192, 00:11:51.396 "large_pool_count": 1024, 00:11:51.396 "small_bufsize": 8192, 00:11:51.396 "large_bufsize": 135168 00:11:51.396 } 00:11:51.396 } 00:11:51.396 ] 00:11:51.396 }, 00:11:51.396 { 00:11:51.396 "subsystem": "sock", 00:11:51.396 "config": [ 00:11:51.396 { 00:11:51.396 "method": "sock_impl_set_options", 00:11:51.396 "params": { 00:11:51.396 "impl_name": "uring", 00:11:51.396 "recv_buf_size": 2097152, 00:11:51.396 "send_buf_size": 2097152, 00:11:51.396 "enable_recv_pipe": true, 00:11:51.396 "enable_quickack": false, 00:11:51.396 "enable_placement_id": 0, 00:11:51.396 "enable_zerocopy_send_server": false, 00:11:51.396 "enable_zerocopy_send_client": false, 00:11:51.396 "zerocopy_threshold": 0, 00:11:51.396 "tls_version": 0, 00:11:51.396 "enable_ktls": false 00:11:51.396 } 00:11:51.396 }, 00:11:51.396 { 00:11:51.396 "method": "sock_impl_set_options", 00:11:51.396 "params": { 00:11:51.396 "impl_name": "posix", 00:11:51.396 "recv_buf_size": 2097152, 00:11:51.396 "send_buf_size": 2097152, 00:11:51.396 "enable_recv_pipe": true, 00:11:51.396 "enable_quickack": false, 00:11:51.396 "enable_placement_id": 0, 00:11:51.396 "enable_zerocopy_send_server": true, 00:11:51.396 "enable_zerocopy_send_client": false, 00:11:51.396 "zerocopy_threshold": 0, 00:11:51.396 "tls_version": 0, 00:11:51.396 "enable_ktls": false 00:11:51.396 } 00:11:51.396 }, 00:11:51.397 { 00:11:51.397 "method": "sock_impl_set_options", 00:11:51.397 "params": { 00:11:51.397 "impl_name": "ssl", 00:11:51.397 "recv_buf_size": 4096, 00:11:51.397 "send_buf_size": 4096, 00:11:51.397 "enable_recv_pipe": true, 00:11:51.397 "enable_quickack": false, 00:11:51.397 "enable_placement_id": 0, 00:11:51.397 "enable_zerocopy_send_server": true, 00:11:51.397 "enable_zerocopy_send_client": false, 00:11:51.397 "zerocopy_threshold": 0, 00:11:51.397 "tls_version": 0, 00:11:51.397 "enable_ktls": false 00:11:51.397 } 00:11:51.397 } 00:11:51.397 ] 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "subsystem": "vmd", 00:11:51.397 "config": [] 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "subsystem": "accel", 00:11:51.397 "config": [ 00:11:51.397 { 00:11:51.397 "method": "accel_set_options", 00:11:51.397 "params": { 00:11:51.397 "small_cache_size": 128, 00:11:51.397 "large_cache_size": 16, 00:11:51.397 "task_count": 2048, 00:11:51.397 "sequence_count": 2048, 00:11:51.397 "buf_count": 2048 00:11:51.397 } 00:11:51.397 } 00:11:51.397 ] 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "subsystem": "bdev", 00:11:51.397 "config": [ 00:11:51.397 { 00:11:51.397 "method": "bdev_set_options", 00:11:51.397 "params": { 00:11:51.397 "bdev_io_pool_size": 65535, 00:11:51.397 "bdev_io_cache_size": 256, 00:11:51.397 "bdev_auto_examine": true, 00:11:51.397 "iobuf_small_cache_size": 128, 00:11:51.397 "iobuf_large_cache_size": 16 00:11:51.397 } 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "method": "bdev_raid_set_options", 00:11:51.397 "params": { 00:11:51.397 "process_window_size_kb": 1024 00:11:51.397 } 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "method": "bdev_iscsi_set_options", 00:11:51.397 "params": { 00:11:51.397 "timeout_sec": 30 
00:11:51.397 } 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "method": "bdev_nvme_set_options", 00:11:51.397 "params": { 00:11:51.397 "action_on_timeout": "none", 00:11:51.397 "timeout_us": 0, 00:11:51.397 "timeout_admin_us": 0, 00:11:51.397 "keep_alive_timeout_ms": 10000, 00:11:51.397 "transport_retry_count": 4, 00:11:51.397 "arbitration_burst": 0, 00:11:51.397 "low_priority_weight": 0, 00:11:51.397 "medium_priority_weight": 0, 00:11:51.397 "high_priority_weight": 0, 00:11:51.397 "nvme_adminq_poll_period_us": 10000, 00:11:51.397 "nvme_ioq_poll_period_us": 0, 00:11:51.397 "io_queue_requests": 0, 00:11:51.397 "delay_cmd_submit": true, 00:11:51.397 "bdev_retry_count": 3, 00:11:51.397 "transport_ack_timeout": 0, 00:11:51.397 "ctrlr_loss_timeout_sec": 0, 00:11:51.397 "reconnect_delay_sec": 0, 00:11:51.397 "fast_io_fail_timeout_sec": 0, 00:11:51.397 "generate_uuids": false, 00:11:51.397 "transport_tos": 0, 00:11:51.397 "io_path_stat": false, 00:11:51.397 "allow_accel_sequence": false 00:11:51.397 } 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "method": "bdev_nvme_set_hotplug", 00:11:51.397 "params": { 00:11:51.397 "period_us": 100000, 00:11:51.397 "enable": false 00:11:51.397 } 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "method": "bdev_malloc_create", 00:11:51.397 "params": { 00:11:51.397 "name": "malloc0", 00:11:51.397 "num_blocks": 8192, 00:11:51.397 "block_size": 4096, 00:11:51.397 "physical_block_size": 4096, 00:11:51.397 "uuid": "e7237587-df6d-4f4b-a782-fd2ed1f8db15", 00:11:51.397 "optimal_io_boundary": 0 00:11:51.397 } 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "method": "bdev_wait_for_examine" 00:11:51.397 } 00:11:51.397 ] 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "subsystem": "nbd", 00:11:51.397 "config": [] 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "subsystem": "scheduler", 00:11:51.397 "config": [ 00:11:51.397 { 00:11:51.397 "method": "framework_set_scheduler", 00:11:51.397 "params": { 00:11:51.397 "name": "static" 00:11:51.397 } 00:11:51.397 } 00:11:51.397 ] 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "subsystem": "nvmf", 00:11:51.397 "config": [ 00:11:51.397 { 00:11:51.397 "method": "nvmf_set_config", 00:11:51.397 "params": { 00:11:51.397 "discovery_filter": "match_any", 00:11:51.397 "admin_cmd_passthru": { 00:11:51.397 "identify_ctrlr": false 00:11:51.397 } 00:11:51.397 } 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "method": "nvmf_set_max_subsystems", 00:11:51.397 "params": { 00:11:51.397 "max_subsystems": 1024 00:11:51.397 } 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "method": "nvmf_set_crdt", 00:11:51.397 "params": { 00:11:51.397 "crdt1": 0, 00:11:51.397 "crdt2": 0, 00:11:51.397 "crdt3": 0 00:11:51.397 } 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "method": "nvmf_create_transport", 00:11:51.397 "params": { 00:11:51.397 "trtype": "TCP", 00:11:51.397 "max_queue_depth": 128, 00:11:51.397 "max_io_qpairs_per_ctrlr": 127, 00:11:51.397 "in_capsule_data_size": 4096, 00:11:51.397 "max_io_size": 131072, 00:11:51.397 "io_unit_size": 131072, 00:11:51.397 "max_aq_depth": 128, 00:11:51.397 "num_shared_buffers": 511, 00:11:51.397 "buf_cache_size": 4294967295, 00:11:51.397 "dif_insert_or_strip": false, 00:11:51.397 "zcopy": false, 00:11:51.397 "c2h_success": false, 00:11:51.397 "sock_priority": 0, 00:11:51.397 "abort_timeout_sec": 1 00:11:51.397 } 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "method": "nvmf_create_subsystem", 00:11:51.397 "params": { 00:11:51.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:51.397 "allow_any_host": false, 00:11:51.397 "serial_number": "SPDK00000000000001", 
00:11:51.397 "model_number": "SPDK bdev Controller", 00:11:51.397 "max_namespaces": 10, 00:11:51.397 "min_cntlid": 1, 00:11:51.397 "max_cntlid": 65519, 00:11:51.397 "ana_reporting": false 00:11:51.397 } 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "method": "nvmf_subsystem_add_host", 00:11:51.397 "params": { 00:11:51.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:51.397 "host": "nqn.2016-06.io.spdk:host1", 00:11:51.397 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:51.397 } 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "method": "nvmf_subsystem_add_ns", 00:11:51.397 "params": { 00:11:51.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:51.397 "namespace": { 00:11:51.397 "nsid": 1, 00:11:51.397 "bdev_name": "malloc0", 00:11:51.397 "nguid": "E7237587DF6D4F4BA782FD2ED1F8DB15", 00:11:51.397 "uuid": "e7237587-df6d-4f4b-a782-fd2ed1f8db15" 00:11:51.397 } 00:11:51.397 } 00:11:51.397 }, 00:11:51.397 { 00:11:51.397 "method": "nvmf_subsystem_add_listener", 00:11:51.397 "params": { 00:11:51.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:51.397 "listen_address": { 00:11:51.397 "trtype": "TCP", 00:11:51.397 "adrfam": "IPv4", 00:11:51.397 "traddr": "10.0.0.2", 00:11:51.397 "trsvcid": "4420" 00:11:51.397 }, 00:11:51.397 "secure_channel": true 00:11:51.397 } 00:11:51.397 } 00:11:51.397 ] 00:11:51.397 } 00:11:51.397 ] 00:11:51.397 }' 00:11:51.397 21:19:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:51.397 21:19:14 -- common/autotest_common.sh@10 -- # set +x 00:11:51.397 21:19:14 -- nvmf/common.sh@469 -- # nvmfpid=77031 00:11:51.397 21:19:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:11:51.397 21:19:14 -- nvmf/common.sh@470 -- # waitforlisten 77031 00:11:51.397 21:19:14 -- common/autotest_common.sh@829 -- # '[' -z 77031 ']' 00:11:51.397 21:19:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.397 21:19:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.397 21:19:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.398 21:19:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.398 21:19:14 -- common/autotest_common.sh@10 -- # set +x 00:11:51.398 [2024-11-28 21:19:15.039518] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:51.398 [2024-11-28 21:19:15.039649] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.655 [2024-11-28 21:19:15.172823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.655 [2024-11-28 21:19:15.203994] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:51.655 [2024-11-28 21:19:15.204193] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.655 [2024-11-28 21:19:15.204206] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.655 [2024-11-28 21:19:15.204214] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:51.655 [2024-11-28 21:19:15.204237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.655 [2024-11-28 21:19:15.381407] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.913 [2024-11-28 21:19:15.413369] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:51.913 [2024-11-28 21:19:15.413576] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.481 21:19:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.481 21:19:16 -- common/autotest_common.sh@862 -- # return 0 00:11:52.481 21:19:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:52.481 21:19:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:52.481 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:11:52.481 21:19:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.481 21:19:16 -- target/tls.sh@216 -- # bdevperf_pid=77064 00:11:52.481 21:19:16 -- target/tls.sh@217 -- # waitforlisten 77064 /var/tmp/bdevperf.sock 00:11:52.481 21:19:16 -- common/autotest_common.sh@829 -- # '[' -z 77064 ']' 00:11:52.481 21:19:16 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:11:52.481 21:19:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:52.481 21:19:16 -- target/tls.sh@213 -- # echo '{ 00:11:52.481 "subsystems": [ 00:11:52.481 { 00:11:52.481 "subsystem": "iobuf", 00:11:52.481 "config": [ 00:11:52.481 { 00:11:52.481 "method": "iobuf_set_options", 00:11:52.481 "params": { 00:11:52.481 "small_pool_count": 8192, 00:11:52.481 "large_pool_count": 1024, 00:11:52.481 "small_bufsize": 8192, 00:11:52.481 "large_bufsize": 135168 00:11:52.481 } 00:11:52.481 } 00:11:52.481 ] 00:11:52.481 }, 00:11:52.481 { 00:11:52.481 "subsystem": "sock", 00:11:52.481 "config": [ 00:11:52.481 { 00:11:52.481 "method": "sock_impl_set_options", 00:11:52.481 "params": { 00:11:52.481 "impl_name": "uring", 00:11:52.481 "recv_buf_size": 2097152, 00:11:52.481 "send_buf_size": 2097152, 00:11:52.481 "enable_recv_pipe": true, 00:11:52.481 "enable_quickack": false, 00:11:52.481 "enable_placement_id": 0, 00:11:52.481 "enable_zerocopy_send_server": false, 00:11:52.481 "enable_zerocopy_send_client": false, 00:11:52.481 "zerocopy_threshold": 0, 00:11:52.481 "tls_version": 0, 00:11:52.481 "enable_ktls": false 00:11:52.481 } 00:11:52.481 }, 00:11:52.481 { 00:11:52.481 "method": "sock_impl_set_options", 00:11:52.481 "params": { 00:11:52.481 "impl_name": "posix", 00:11:52.481 "recv_buf_size": 2097152, 00:11:52.481 "send_buf_size": 2097152, 00:11:52.481 "enable_recv_pipe": true, 00:11:52.481 "enable_quickack": false, 00:11:52.481 "enable_placement_id": 0, 00:11:52.481 "enable_zerocopy_send_server": true, 00:11:52.481 "enable_zerocopy_send_client": false, 00:11:52.481 "zerocopy_threshold": 0, 00:11:52.481 "tls_version": 0, 00:11:52.481 "enable_ktls": false 00:11:52.481 } 00:11:52.481 }, 00:11:52.481 { 00:11:52.481 "method": "sock_impl_set_options", 00:11:52.481 "params": { 00:11:52.481 "impl_name": "ssl", 00:11:52.481 "recv_buf_size": 4096, 00:11:52.481 "send_buf_size": 4096, 00:11:52.481 "enable_recv_pipe": true, 00:11:52.481 "enable_quickack": false, 00:11:52.481 "enable_placement_id": 0, 00:11:52.481 "enable_zerocopy_send_server": true, 00:11:52.481 "enable_zerocopy_send_client": false, 00:11:52.481 "zerocopy_threshold": 
0, 00:11:52.481 "tls_version": 0, 00:11:52.481 "enable_ktls": false 00:11:52.481 } 00:11:52.481 } 00:11:52.481 ] 00:11:52.481 }, 00:11:52.481 { 00:11:52.481 "subsystem": "vmd", 00:11:52.481 "config": [] 00:11:52.481 }, 00:11:52.481 { 00:11:52.481 "subsystem": "accel", 00:11:52.481 "config": [ 00:11:52.481 { 00:11:52.481 "method": "accel_set_options", 00:11:52.481 "params": { 00:11:52.481 "small_cache_size": 128, 00:11:52.481 "large_cache_size": 16, 00:11:52.481 "task_count": 2048, 00:11:52.481 "sequence_count": 2048, 00:11:52.481 "buf_count": 2048 00:11:52.481 } 00:11:52.481 } 00:11:52.481 ] 00:11:52.481 }, 00:11:52.481 { 00:11:52.481 "subsystem": "bdev", 00:11:52.481 "config": [ 00:11:52.481 { 00:11:52.481 "method": "bdev_set_options", 00:11:52.482 "params": { 00:11:52.482 "bdev_io_pool_size": 65535, 00:11:52.482 "bdev_io_cache_size": 256, 00:11:52.482 "bdev_auto_examine": true, 00:11:52.482 "iobuf_small_cache_size": 128, 00:11:52.482 "iobuf_large_cache_size": 16 00:11:52.482 } 00:11:52.482 }, 00:11:52.482 { 00:11:52.482 "method": "bdev_raid_set_options", 00:11:52.482 "params": { 00:11:52.482 "process_window_size_kb": 1024 00:11:52.482 } 00:11:52.482 }, 00:11:52.482 { 00:11:52.482 "method": "bdev_iscsi_set_options", 00:11:52.482 "params": { 00:11:52.482 "timeout_sec": 30 00:11:52.482 } 00:11:52.482 }, 00:11:52.482 { 00:11:52.482 "method": "bdev_nvme_set_options", 00:11:52.482 "params": { 00:11:52.482 "action_on_timeout": "none", 00:11:52.482 "timeout_us": 0, 00:11:52.482 "timeout_admin_us": 0, 00:11:52.482 "keep_alive_timeout_ms": 10000, 00:11:52.482 "transport_retry_count": 4, 00:11:52.482 "arbitration_burst": 0, 00:11:52.482 "low_priority_weight": 0, 00:11:52.482 "medium_priority_weight": 0, 00:11:52.482 "high_priority_weight": 0, 00:11:52.482 "nvme_adminq_poll_period_us": 10000, 00:11:52.482 "nvme_ioq_poll_period_us": 0, 00:11:52.482 "io_queue_requests": 512, 00:11:52.482 "delay_cmd_submit": true, 00:11:52.482 "bdev_retry_count": 3, 00:11:52.482 "transport_ack_timeout": 0, 00:11:52.482 "ctrlr_loss_timeout_sec": 0, 00:11:52.482 "reconnect_delay_sec": 0, 00:11:52.482 "fast_io_fail_timeout_sec": 0, 00:11:52.482 "generate_uuids": false, 00:11:52.482 "transport_tos": 0, 00:11:52.482 "io_path_stat": false, 00:11:52.482 "allow_accel_sequence": false 00:11:52.482 } 00:11:52.482 }, 00:11:52.482 { 00:11:52.482 "method": "bdev_nvme_attach_controller", 00:11:52.482 "params": { 00:11:52.482 "name": "TLSTEST", 00:11:52.482 "trtype": "TCP", 00:11:52.482 "adrfam": "IPv4", 00:11:52.482 "traddr": "10.0.0.2", 00:11:52.482 "trsvcid": "4420", 00:11:52.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.482 "prchk_reftag": false, 00:11:52.482 "prchk_guard": false, 00:11:52.482 "ctrlr_loss_timeout_sec": 0, 00:11:52.482 "reconnect_delay_sec": 0, 00:11:52.482 "fast_io_fail_timeout_sec": 0, 00:11:52.482 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:52.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.482 "hdgst": false, 00:11:52.482 "ddgst": false 00:11:52.482 } 00:11:52.482 }, 00:11:52.482 { 00:11:52.482 "method": "bdev_nvme_set_hotplug", 00:11:52.482 "params": { 00:11:52.482 "period_us": 100000, 00:11:52.482 "enable": false 00:11:52.482 } 00:11:52.482 }, 00:11:52.482 { 00:11:52.482 "method": "bdev_wait_for_examine" 00:11:52.482 } 00:11:52.482 ] 00:11:52.482 }, 00:11:52.482 { 00:11:52.482 "subsystem": "nbd", 00:11:52.482 "config": [] 00:11:52.482 } 00:11:52.482 ] 00:11:52.482 }' 00:11:52.482 21:19:16 -- common/autotest_common.sh@834 -- # local max_retries=100 
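The bdevperf configuration just echoed attaches the initiator with a pre-shared key, so the NVMe/TCP connection to cnode1 is carried over TLS. The same attach can also be issued by hand against a running bdevperf instance; this is only a sketch whose flags mirror the JSON parameters above and the paths used in this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt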
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:52.482 21:19:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:52.482 21:19:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.482 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:11:52.482 [2024-11-28 21:19:16.088363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:52.482 [2024-11-28 21:19:16.088454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77064 ] 00:11:52.741 [2024-11-28 21:19:16.226022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.741 [2024-11-28 21:19:16.267800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.741 [2024-11-28 21:19:16.395792] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:53.695 21:19:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.695 21:19:17 -- common/autotest_common.sh@862 -- # return 0 00:11:53.695 21:19:17 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:53.695 Running I/O for 10 seconds... 00:12:03.704 00:12:03.704 Latency(us) 00:12:03.704 [2024-11-28T21:19:27.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.704 [2024-11-28T21:19:27.447Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:03.704 Verification LBA range: start 0x0 length 0x2000 00:12:03.704 TLSTESTn1 : 10.01 5799.17 22.65 0.00 0.00 22041.05 2129.92 27525.12 00:12:03.704 [2024-11-28T21:19:27.447Z] =================================================================================================================== 00:12:03.704 [2024-11-28T21:19:27.447Z] Total : 5799.17 22.65 0.00 0.00 22041.05 2129.92 27525.12 00:12:03.704 0 00:12:03.704 21:19:27 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:03.704 21:19:27 -- target/tls.sh@223 -- # killprocess 77064 00:12:03.704 21:19:27 -- common/autotest_common.sh@936 -- # '[' -z 77064 ']' 00:12:03.704 21:19:27 -- common/autotest_common.sh@940 -- # kill -0 77064 00:12:03.704 21:19:27 -- common/autotest_common.sh@941 -- # uname 00:12:03.704 21:19:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:03.704 21:19:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77064 00:12:03.704 21:19:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:03.704 killing process with pid 77064 00:12:03.704 21:19:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:03.704 21:19:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77064' 00:12:03.704 21:19:27 -- common/autotest_common.sh@955 -- # kill 77064 00:12:03.704 Received shutdown signal, test time was about 10.000000 seconds 00:12:03.704 00:12:03.704 Latency(us) 00:12:03.704 [2024-11-28T21:19:27.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.704 [2024-11-28T21:19:27.447Z] =================================================================================================================== 00:12:03.704 [2024-11-28T21:19:27.447Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:12:03.704 21:19:27 -- common/autotest_common.sh@960 -- # wait 77064 00:12:03.704 21:19:27 -- target/tls.sh@224 -- # killprocess 77031 00:12:03.704 21:19:27 -- common/autotest_common.sh@936 -- # '[' -z 77031 ']' 00:12:03.704 21:19:27 -- common/autotest_common.sh@940 -- # kill -0 77031 00:12:03.704 21:19:27 -- common/autotest_common.sh@941 -- # uname 00:12:03.704 21:19:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:03.704 21:19:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77031 00:12:03.704 21:19:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:03.704 21:19:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:03.704 killing process with pid 77031 00:12:03.704 21:19:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77031' 00:12:03.704 21:19:27 -- common/autotest_common.sh@955 -- # kill 77031 00:12:03.704 21:19:27 -- common/autotest_common.sh@960 -- # wait 77031 00:12:03.963 21:19:27 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:12:03.963 21:19:27 -- target/tls.sh@227 -- # cleanup 00:12:03.963 21:19:27 -- target/tls.sh@15 -- # process_shm --id 0 00:12:03.963 21:19:27 -- common/autotest_common.sh@806 -- # type=--id 00:12:03.963 21:19:27 -- common/autotest_common.sh@807 -- # id=0 00:12:03.963 21:19:27 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:03.963 21:19:27 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:03.963 21:19:27 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:03.963 21:19:27 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:03.963 21:19:27 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:03.963 21:19:27 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:03.963 nvmf_trace.0 00:12:03.963 21:19:27 -- common/autotest_common.sh@821 -- # return 0 00:12:03.963 21:19:27 -- target/tls.sh@16 -- # killprocess 77064 00:12:03.963 21:19:27 -- common/autotest_common.sh@936 -- # '[' -z 77064 ']' 00:12:03.963 21:19:27 -- common/autotest_common.sh@940 -- # kill -0 77064 00:12:03.963 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77064) - No such process 00:12:03.963 Process with pid 77064 is not found 00:12:03.963 21:19:27 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77064 is not found' 00:12:03.963 21:19:27 -- target/tls.sh@17 -- # nvmftestfini 00:12:03.963 21:19:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:03.963 21:19:27 -- nvmf/common.sh@116 -- # sync 00:12:03.963 21:19:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:03.963 21:19:27 -- nvmf/common.sh@119 -- # set +e 00:12:03.963 21:19:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:03.963 21:19:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:03.963 rmmod nvme_tcp 00:12:03.963 rmmod nvme_fabrics 00:12:03.963 rmmod nvme_keyring 00:12:04.222 21:19:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:04.222 21:19:27 -- nvmf/common.sh@123 -- # set -e 00:12:04.222 21:19:27 -- nvmf/common.sh@124 -- # return 0 00:12:04.222 21:19:27 -- nvmf/common.sh@477 -- # '[' -n 77031 ']' 00:12:04.222 21:19:27 -- nvmf/common.sh@478 -- # killprocess 77031 00:12:04.222 21:19:27 -- common/autotest_common.sh@936 -- # '[' -z 77031 ']' 00:12:04.222 21:19:27 -- common/autotest_common.sh@940 -- # kill -0 77031 00:12:04.222 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77031) - No such process 00:12:04.222 Process with pid 77031 is not found 00:12:04.222 21:19:27 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77031 is not found' 00:12:04.222 21:19:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:04.222 21:19:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:04.222 21:19:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:04.222 21:19:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.222 21:19:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:04.222 21:19:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.222 21:19:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.222 21:19:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.222 21:19:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:04.222 21:19:27 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:04.222 00:12:04.222 real 1m7.687s 00:12:04.222 user 1m44.879s 00:12:04.222 sys 0m23.516s 00:12:04.222 21:19:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:04.222 ************************************ 00:12:04.222 END TEST nvmf_tls 00:12:04.222 21:19:27 -- common/autotest_common.sh@10 -- # set +x 00:12:04.222 ************************************ 00:12:04.222 21:19:27 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:04.222 21:19:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:04.222 21:19:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:04.222 21:19:27 -- common/autotest_common.sh@10 -- # set +x 00:12:04.222 ************************************ 00:12:04.222 START TEST nvmf_fips 00:12:04.222 ************************************ 00:12:04.222 21:19:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:04.222 * Looking for test storage... 
00:12:04.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:12:04.222 21:19:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:04.222 21:19:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:04.222 21:19:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:04.222 21:19:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:04.222 21:19:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:04.222 21:19:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:04.222 21:19:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:04.222 21:19:27 -- scripts/common.sh@335 -- # IFS=.-: 00:12:04.222 21:19:27 -- scripts/common.sh@335 -- # read -ra ver1 00:12:04.222 21:19:27 -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.222 21:19:27 -- scripts/common.sh@336 -- # read -ra ver2 00:12:04.482 21:19:27 -- scripts/common.sh@337 -- # local 'op=<' 00:12:04.482 21:19:27 -- scripts/common.sh@339 -- # ver1_l=2 00:12:04.482 21:19:27 -- scripts/common.sh@340 -- # ver2_l=1 00:12:04.482 21:19:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:04.482 21:19:27 -- scripts/common.sh@343 -- # case "$op" in 00:12:04.482 21:19:27 -- scripts/common.sh@344 -- # : 1 00:12:04.482 21:19:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:04.482 21:19:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:04.482 21:19:27 -- scripts/common.sh@364 -- # decimal 1 00:12:04.482 21:19:27 -- scripts/common.sh@352 -- # local d=1 00:12:04.482 21:19:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.482 21:19:27 -- scripts/common.sh@354 -- # echo 1 00:12:04.482 21:19:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:04.482 21:19:27 -- scripts/common.sh@365 -- # decimal 2 00:12:04.482 21:19:27 -- scripts/common.sh@352 -- # local d=2 00:12:04.482 21:19:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.482 21:19:27 -- scripts/common.sh@354 -- # echo 2 00:12:04.482 21:19:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:04.482 21:19:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:04.482 21:19:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:04.482 21:19:27 -- scripts/common.sh@367 -- # return 0 00:12:04.482 21:19:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.482 21:19:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:04.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.482 --rc genhtml_branch_coverage=1 00:12:04.482 --rc genhtml_function_coverage=1 00:12:04.482 --rc genhtml_legend=1 00:12:04.482 --rc geninfo_all_blocks=1 00:12:04.482 --rc geninfo_unexecuted_blocks=1 00:12:04.482 00:12:04.482 ' 00:12:04.482 21:19:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:04.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.482 --rc genhtml_branch_coverage=1 00:12:04.482 --rc genhtml_function_coverage=1 00:12:04.482 --rc genhtml_legend=1 00:12:04.482 --rc geninfo_all_blocks=1 00:12:04.482 --rc geninfo_unexecuted_blocks=1 00:12:04.482 00:12:04.482 ' 00:12:04.482 21:19:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:04.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.482 --rc genhtml_branch_coverage=1 00:12:04.482 --rc genhtml_function_coverage=1 00:12:04.482 --rc genhtml_legend=1 00:12:04.482 --rc geninfo_all_blocks=1 00:12:04.482 --rc geninfo_unexecuted_blocks=1 00:12:04.482 00:12:04.482 ' 00:12:04.482 
21:19:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:04.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.482 --rc genhtml_branch_coverage=1 00:12:04.482 --rc genhtml_function_coverage=1 00:12:04.482 --rc genhtml_legend=1 00:12:04.482 --rc geninfo_all_blocks=1 00:12:04.482 --rc geninfo_unexecuted_blocks=1 00:12:04.482 00:12:04.482 ' 00:12:04.482 21:19:27 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:04.482 21:19:27 -- nvmf/common.sh@7 -- # uname -s 00:12:04.482 21:19:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.483 21:19:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.483 21:19:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.483 21:19:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.483 21:19:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.483 21:19:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.483 21:19:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.483 21:19:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.483 21:19:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.483 21:19:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.483 21:19:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:12:04.483 21:19:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:12:04.483 21:19:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.483 21:19:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.483 21:19:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:04.483 21:19:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:04.483 21:19:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.483 21:19:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.483 21:19:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.483 21:19:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.483 21:19:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.483 21:19:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.483 21:19:27 -- paths/export.sh@5 -- # export PATH 00:12:04.483 21:19:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.483 21:19:27 -- nvmf/common.sh@46 -- # : 0 00:12:04.483 21:19:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:04.483 21:19:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:04.483 21:19:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:04.483 21:19:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.483 21:19:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.483 21:19:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:04.483 21:19:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:04.483 21:19:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:04.483 21:19:28 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:04.483 21:19:28 -- fips/fips.sh@89 -- # check_openssl_version 00:12:04.483 21:19:28 -- fips/fips.sh@83 -- # local target=3.0.0 00:12:04.483 21:19:28 -- fips/fips.sh@85 -- # openssl version 00:12:04.483 21:19:28 -- fips/fips.sh@85 -- # awk '{print $2}' 00:12:04.483 21:19:28 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:12:04.483 21:19:28 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:12:04.483 21:19:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:04.483 21:19:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:04.483 21:19:28 -- scripts/common.sh@335 -- # IFS=.-: 00:12:04.483 21:19:28 -- scripts/common.sh@335 -- # read -ra ver1 00:12:04.483 21:19:28 -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.483 21:19:28 -- scripts/common.sh@336 -- # read -ra ver2 00:12:04.483 21:19:28 -- scripts/common.sh@337 -- # local 'op=>=' 00:12:04.483 21:19:28 -- scripts/common.sh@339 -- # ver1_l=3 00:12:04.483 21:19:28 -- scripts/common.sh@340 -- # ver2_l=3 00:12:04.483 21:19:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:04.483 21:19:28 -- scripts/common.sh@343 -- # case "$op" in 00:12:04.483 21:19:28 -- scripts/common.sh@347 -- # : 1 00:12:04.483 21:19:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:04.483 21:19:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:04.483 21:19:28 -- scripts/common.sh@364 -- # decimal 3 00:12:04.483 21:19:28 -- scripts/common.sh@352 -- # local d=3 00:12:04.483 21:19:28 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:04.483 21:19:28 -- scripts/common.sh@354 -- # echo 3 00:12:04.483 21:19:28 -- scripts/common.sh@364 -- # ver1[v]=3 00:12:04.483 21:19:28 -- scripts/common.sh@365 -- # decimal 3 00:12:04.483 21:19:28 -- scripts/common.sh@352 -- # local d=3 00:12:04.483 21:19:28 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:04.483 21:19:28 -- scripts/common.sh@354 -- # echo 3 00:12:04.483 21:19:28 -- scripts/common.sh@365 -- # ver2[v]=3 00:12:04.483 21:19:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:04.483 21:19:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:04.483 21:19:28 -- scripts/common.sh@363 -- # (( v++ )) 00:12:04.483 21:19:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:04.483 21:19:28 -- scripts/common.sh@364 -- # decimal 1 00:12:04.483 21:19:28 -- scripts/common.sh@352 -- # local d=1 00:12:04.483 21:19:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.483 21:19:28 -- scripts/common.sh@354 -- # echo 1 00:12:04.483 21:19:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:04.483 21:19:28 -- scripts/common.sh@365 -- # decimal 0 00:12:04.483 21:19:28 -- scripts/common.sh@352 -- # local d=0 00:12:04.483 21:19:28 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:04.483 21:19:28 -- scripts/common.sh@354 -- # echo 0 00:12:04.483 21:19:28 -- scripts/common.sh@365 -- # ver2[v]=0 00:12:04.483 21:19:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:04.483 21:19:28 -- scripts/common.sh@366 -- # return 0 00:12:04.483 21:19:28 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:12:04.483 21:19:28 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:12:04.483 21:19:28 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:12:04.483 21:19:28 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:12:04.483 21:19:28 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:12:04.483 21:19:28 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:12:04.483 21:19:28 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:12:04.483 21:19:28 -- fips/fips.sh@113 -- # build_openssl_config 00:12:04.483 21:19:28 -- fips/fips.sh@37 -- # cat 00:12:04.483 21:19:28 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:12:04.483 21:19:28 -- fips/fips.sh@58 -- # cat - 00:12:04.483 21:19:28 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:12:04.483 21:19:28 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:12:04.483 21:19:28 -- fips/fips.sh@116 -- # mapfile -t providers 00:12:04.483 21:19:28 -- fips/fips.sh@116 -- # openssl list -providers 00:12:04.483 21:19:28 -- fips/fips.sh@116 -- # grep name 00:12:04.483 21:19:28 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:12:04.483 21:19:28 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:12:04.483 21:19:28 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:12:04.483 21:19:28 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:12:04.483 21:19:28 -- common/autotest_common.sh@650 -- # local es=0 00:12:04.483 21:19:28 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:12:04.483 21:19:28 -- common/autotest_common.sh@638 -- # local arg=openssl 00:12:04.483 21:19:28 -- fips/fips.sh@127 -- # : 00:12:04.483 21:19:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.483 21:19:28 -- common/autotest_common.sh@642 -- # type -t openssl 00:12:04.483 21:19:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.483 21:19:28 -- common/autotest_common.sh@644 -- # type -P openssl 00:12:04.483 21:19:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.483 21:19:28 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:12:04.483 21:19:28 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:12:04.483 21:19:28 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:12:04.483 Error setting digest 00:12:04.483 4072C40B1E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:12:04.483 4072C40B1E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:12:04.483 21:19:28 -- common/autotest_common.sh@653 -- # es=1 00:12:04.483 21:19:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:04.483 21:19:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:04.483 21:19:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:04.483 21:19:28 -- fips/fips.sh@130 -- # nvmftestinit 00:12:04.483 21:19:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:04.483 21:19:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.483 21:19:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:04.483 21:19:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:04.483 21:19:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:04.483 21:19:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.483 21:19:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.483 21:19:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.483 21:19:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:04.483 21:19:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:04.483 21:19:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:04.483 21:19:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:04.483 21:19:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:04.483 21:19:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:04.483 21:19:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.483 21:19:28 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.484 21:19:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:04.484 21:19:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:04.484 21:19:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:04.484 21:19:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:04.484 21:19:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:04.484 21:19:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.484 21:19:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:04.484 21:19:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:04.484 21:19:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:04.484 21:19:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:04.484 21:19:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:04.484 21:19:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:04.484 Cannot find device "nvmf_tgt_br" 00:12:04.484 21:19:28 -- nvmf/common.sh@154 -- # true 00:12:04.484 21:19:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:04.484 Cannot find device "nvmf_tgt_br2" 00:12:04.484 21:19:28 -- nvmf/common.sh@155 -- # true 00:12:04.484 21:19:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:04.484 21:19:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:04.484 Cannot find device "nvmf_tgt_br" 00:12:04.484 21:19:28 -- nvmf/common.sh@157 -- # true 00:12:04.484 21:19:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:04.484 Cannot find device "nvmf_tgt_br2" 00:12:04.484 21:19:28 -- nvmf/common.sh@158 -- # true 00:12:04.484 21:19:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:04.744 21:19:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:04.744 21:19:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:04.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.744 21:19:28 -- nvmf/common.sh@161 -- # true 00:12:04.744 21:19:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:04.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.744 21:19:28 -- nvmf/common.sh@162 -- # true 00:12:04.744 21:19:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:04.744 21:19:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:04.744 21:19:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:04.744 21:19:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:04.744 21:19:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:04.744 21:19:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:04.744 21:19:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:04.744 21:19:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:04.744 21:19:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:04.744 21:19:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:04.744 21:19:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:04.744 21:19:28 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:04.744 21:19:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:04.744 21:19:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:04.744 21:19:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:04.744 21:19:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:04.744 21:19:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:04.744 21:19:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:04.744 21:19:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:04.744 21:19:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:04.744 21:19:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:04.744 21:19:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:04.744 21:19:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:04.744 21:19:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:05.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:12:05.002 00:12:05.002 --- 10.0.0.2 ping statistics --- 00:12:05.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.002 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:12:05.002 21:19:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:05.002 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:05.002 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:12:05.002 00:12:05.002 --- 10.0.0.3 ping statistics --- 00:12:05.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.002 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:05.002 21:19:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:05.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:05.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:12:05.002 00:12:05.002 --- 10.0.0.1 ping statistics --- 00:12:05.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.002 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:05.002 21:19:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.002 21:19:28 -- nvmf/common.sh@421 -- # return 0 00:12:05.002 21:19:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:05.002 21:19:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.002 21:19:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:05.002 21:19:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:05.002 21:19:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.002 21:19:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:05.002 21:19:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:05.002 21:19:28 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:12:05.002 21:19:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:05.002 21:19:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:05.002 21:19:28 -- common/autotest_common.sh@10 -- # set +x 00:12:05.002 21:19:28 -- nvmf/common.sh@469 -- # nvmfpid=77413 00:12:05.002 21:19:28 -- nvmf/common.sh@470 -- # waitforlisten 77413 00:12:05.002 21:19:28 -- common/autotest_common.sh@829 -- # '[' -z 77413 ']' 00:12:05.002 21:19:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.002 21:19:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:05.002 21:19:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:05.003 21:19:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.003 21:19:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:05.003 21:19:28 -- common/autotest_common.sh@10 -- # set +x 00:12:05.003 [2024-11-28 21:19:28.586937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:05.003 [2024-11-28 21:19:28.587061] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.003 [2024-11-28 21:19:28.721192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.261 [2024-11-28 21:19:28.753666] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:05.261 [2024-11-28 21:19:28.753814] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.261 [2024-11-28 21:19:28.753826] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.261 [2024-11-28 21:19:28.753835] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
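The pings above confirm the veth/bridge topology the harness rebuilds for each target run: 10.0.0.1 sits on the host side, 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace. Condensed from the ip commands earlier in this log (link-up steps and the second target interface omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # host -> target namespace, as verified above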
00:12:05.261 [2024-11-28 21:19:28.753863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.194 21:19:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:06.194 21:19:29 -- common/autotest_common.sh@862 -- # return 0 00:12:06.194 21:19:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:06.194 21:19:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:06.194 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:12:06.194 21:19:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.194 21:19:29 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:12:06.194 21:19:29 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:06.194 21:19:29 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:06.194 21:19:29 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:06.194 21:19:29 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:06.194 21:19:29 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:06.194 21:19:29 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:06.194 21:19:29 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:06.194 [2024-11-28 21:19:29.897132] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.194 [2024-11-28 21:19:29.913092] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:06.194 [2024-11-28 21:19:29.913290] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.452 malloc0 00:12:06.452 21:19:29 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:06.452 21:19:29 -- fips/fips.sh@147 -- # bdevperf_pid=77447 00:12:06.452 21:19:29 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:06.452 21:19:29 -- fips/fips.sh@148 -- # waitforlisten 77447 /var/tmp/bdevperf.sock 00:12:06.452 21:19:29 -- common/autotest_common.sh@829 -- # '[' -z 77447 ']' 00:12:06.452 21:19:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:06.452 21:19:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:06.452 21:19:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:06.452 21:19:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.452 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:12:06.452 [2024-11-28 21:19:30.028958] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:06.452 [2024-11-28 21:19:30.029050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77447 ] 00:12:06.452 [2024-11-28 21:19:30.160781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.711 [2024-11-28 21:19:30.197463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.277 21:19:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:07.277 21:19:30 -- common/autotest_common.sh@862 -- # return 0 00:12:07.277 21:19:30 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:07.535 [2024-11-28 21:19:31.201946] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:07.535 TLSTESTn1 00:12:07.793 21:19:31 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:07.793 Running I/O for 10 seconds... 00:12:17.766 00:12:17.766 Latency(us) 00:12:17.766 [2024-11-28T21:19:41.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.766 [2024-11-28T21:19:41.509Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:17.766 Verification LBA range: start 0x0 length 0x2000 00:12:17.767 TLSTESTn1 : 10.01 5565.51 21.74 0.00 0.00 22960.01 4319.42 28120.90 00:12:17.767 [2024-11-28T21:19:41.510Z] =================================================================================================================== 00:12:17.767 [2024-11-28T21:19:41.510Z] Total : 5565.51 21.74 0.00 0.00 22960.01 4319.42 28120.90 00:12:17.767 0 00:12:17.767 21:19:41 -- fips/fips.sh@1 -- # cleanup 00:12:17.767 21:19:41 -- fips/fips.sh@15 -- # process_shm --id 0 00:12:17.767 21:19:41 -- common/autotest_common.sh@806 -- # type=--id 00:12:17.767 21:19:41 -- common/autotest_common.sh@807 -- # id=0 00:12:17.767 21:19:41 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:17.767 21:19:41 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:17.767 21:19:41 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:17.767 21:19:41 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:17.767 21:19:41 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:17.767 21:19:41 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:17.767 nvmf_trace.0 00:12:17.767 21:19:41 -- common/autotest_common.sh@821 -- # return 0 00:12:17.767 21:19:41 -- fips/fips.sh@16 -- # killprocess 77447 00:12:17.767 21:19:41 -- common/autotest_common.sh@936 -- # '[' -z 77447 ']' 00:12:17.767 21:19:41 -- common/autotest_common.sh@940 -- # kill -0 77447 00:12:17.767 21:19:41 -- common/autotest_common.sh@941 -- # uname 00:12:17.767 21:19:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:17.767 21:19:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77447 00:12:18.025 killing process with pid 77447 00:12:18.025 Received shutdown signal, test time was about 10.000000 seconds 00:12:18.025 00:12:18.025 Latency(us) 00:12:18.025 
[2024-11-28T21:19:41.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.025 [2024-11-28T21:19:41.768Z] =================================================================================================================== 00:12:18.025 [2024-11-28T21:19:41.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:18.025 21:19:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:18.025 21:19:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:18.025 21:19:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77447' 00:12:18.025 21:19:41 -- common/autotest_common.sh@955 -- # kill 77447 00:12:18.025 21:19:41 -- common/autotest_common.sh@960 -- # wait 77447 00:12:18.025 21:19:41 -- fips/fips.sh@17 -- # nvmftestfini 00:12:18.025 21:19:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:18.025 21:19:41 -- nvmf/common.sh@116 -- # sync 00:12:18.025 21:19:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:18.025 21:19:41 -- nvmf/common.sh@119 -- # set +e 00:12:18.025 21:19:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:18.025 21:19:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:18.025 rmmod nvme_tcp 00:12:18.025 rmmod nvme_fabrics 00:12:18.025 rmmod nvme_keyring 00:12:18.283 21:19:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:18.283 21:19:41 -- nvmf/common.sh@123 -- # set -e 00:12:18.283 21:19:41 -- nvmf/common.sh@124 -- # return 0 00:12:18.283 21:19:41 -- nvmf/common.sh@477 -- # '[' -n 77413 ']' 00:12:18.283 21:19:41 -- nvmf/common.sh@478 -- # killprocess 77413 00:12:18.283 21:19:41 -- common/autotest_common.sh@936 -- # '[' -z 77413 ']' 00:12:18.283 21:19:41 -- common/autotest_common.sh@940 -- # kill -0 77413 00:12:18.283 21:19:41 -- common/autotest_common.sh@941 -- # uname 00:12:18.283 21:19:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:18.283 21:19:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77413 00:12:18.283 killing process with pid 77413 00:12:18.283 21:19:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:18.283 21:19:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:18.283 21:19:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77413' 00:12:18.283 21:19:41 -- common/autotest_common.sh@955 -- # kill 77413 00:12:18.283 21:19:41 -- common/autotest_common.sh@960 -- # wait 77413 00:12:18.283 21:19:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:18.283 21:19:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:18.283 21:19:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:18.283 21:19:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.283 21:19:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:18.283 21:19:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.283 21:19:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.283 21:19:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.283 21:19:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:18.283 21:19:41 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:18.283 ************************************ 00:12:18.283 END TEST nvmf_fips 00:12:18.283 ************************************ 00:12:18.283 00:12:18.283 real 0m14.183s 00:12:18.283 user 0m18.892s 00:12:18.283 sys 0m5.883s 00:12:18.283 21:19:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 
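Stepping back, the TLS exercise that just finished (TLSTESTn1 above) reduces to three pieces: a PSK written to a 0600 key file, a bdevperf instance with its own RPC socket, and a bdev_nvme_attach_controller call that passes --psk so the TCP connection to 10.0.0.2:4420 negotiates TLS. A condensed sketch of the commands from the trace, with paths shown relative to the spdk repo; the target-side configuration performed by setup_nvmf_tgt_conf is not visible in this excerpt and is omitted here:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$key" > test/nvmf/fips/key.txt && chmod 0600 test/nvmf/fips/key.txt
    # -z keeps bdevperf waiting for RPC configuration before it starts any I/O
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    # once /var/tmp/bdevperf.sock is up, attach the controller over TLS
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests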
00:12:18.283 21:19:41 -- common/autotest_common.sh@10 -- # set +x 00:12:18.542 21:19:42 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:12:18.542 21:19:42 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:18.542 21:19:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:18.542 21:19:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:18.542 21:19:42 -- common/autotest_common.sh@10 -- # set +x 00:12:18.542 ************************************ 00:12:18.542 START TEST nvmf_fuzz 00:12:18.542 ************************************ 00:12:18.542 21:19:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:18.542 * Looking for test storage... 00:12:18.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:18.542 21:19:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:18.542 21:19:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:18.542 21:19:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:18.542 21:19:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:18.542 21:19:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:18.542 21:19:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:18.542 21:19:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:18.542 21:19:42 -- scripts/common.sh@335 -- # IFS=.-: 00:12:18.542 21:19:42 -- scripts/common.sh@335 -- # read -ra ver1 00:12:18.542 21:19:42 -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.542 21:19:42 -- scripts/common.sh@336 -- # read -ra ver2 00:12:18.542 21:19:42 -- scripts/common.sh@337 -- # local 'op=<' 00:12:18.542 21:19:42 -- scripts/common.sh@339 -- # ver1_l=2 00:12:18.542 21:19:42 -- scripts/common.sh@340 -- # ver2_l=1 00:12:18.542 21:19:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:18.542 21:19:42 -- scripts/common.sh@343 -- # case "$op" in 00:12:18.542 21:19:42 -- scripts/common.sh@344 -- # : 1 00:12:18.542 21:19:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:18.542 21:19:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:18.542 21:19:42 -- scripts/common.sh@364 -- # decimal 1 00:12:18.542 21:19:42 -- scripts/common.sh@352 -- # local d=1 00:12:18.542 21:19:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.542 21:19:42 -- scripts/common.sh@354 -- # echo 1 00:12:18.542 21:19:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:18.542 21:19:42 -- scripts/common.sh@365 -- # decimal 2 00:12:18.542 21:19:42 -- scripts/common.sh@352 -- # local d=2 00:12:18.542 21:19:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.542 21:19:42 -- scripts/common.sh@354 -- # echo 2 00:12:18.542 21:19:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:18.542 21:19:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:18.542 21:19:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:18.542 21:19:42 -- scripts/common.sh@367 -- # return 0 00:12:18.542 21:19:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.542 21:19:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:18.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.542 --rc genhtml_branch_coverage=1 00:12:18.542 --rc genhtml_function_coverage=1 00:12:18.542 --rc genhtml_legend=1 00:12:18.542 --rc geninfo_all_blocks=1 00:12:18.542 --rc geninfo_unexecuted_blocks=1 00:12:18.543 00:12:18.543 ' 00:12:18.543 21:19:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:18.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.543 --rc genhtml_branch_coverage=1 00:12:18.543 --rc genhtml_function_coverage=1 00:12:18.543 --rc genhtml_legend=1 00:12:18.543 --rc geninfo_all_blocks=1 00:12:18.543 --rc geninfo_unexecuted_blocks=1 00:12:18.543 00:12:18.543 ' 00:12:18.543 21:19:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:18.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.543 --rc genhtml_branch_coverage=1 00:12:18.543 --rc genhtml_function_coverage=1 00:12:18.543 --rc genhtml_legend=1 00:12:18.543 --rc geninfo_all_blocks=1 00:12:18.543 --rc geninfo_unexecuted_blocks=1 00:12:18.543 00:12:18.543 ' 00:12:18.543 21:19:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:18.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.543 --rc genhtml_branch_coverage=1 00:12:18.543 --rc genhtml_function_coverage=1 00:12:18.543 --rc genhtml_legend=1 00:12:18.543 --rc geninfo_all_blocks=1 00:12:18.543 --rc geninfo_unexecuted_blocks=1 00:12:18.543 00:12:18.543 ' 00:12:18.543 21:19:42 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:18.543 21:19:42 -- nvmf/common.sh@7 -- # uname -s 00:12:18.543 21:19:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.543 21:19:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.543 21:19:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.543 21:19:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.543 21:19:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.543 21:19:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.543 21:19:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.543 21:19:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.543 21:19:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.543 21:19:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.543 21:19:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 
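As an aside, the scripts/common.sh trace a few lines up (lt 1.15 2 invoking cmp_versions 1.15 '<' 2) is the lcov version gate; flattened into the log it is hard to follow, so here is an illustrative reconstruction of what those traced steps compute. Only the '<' operator is handled in this sketch; the real helper also supports the other comparison operators and normalizes fields through its decimal helper:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        # compare component by component, padding the shorter version with 0
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions do not satisfy '<'
    }
    # e.g. lt 1.15 2 succeeds here, so the lcov branch/function coverage flags get set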
00:12:18.543 21:19:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:12:18.543 21:19:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.543 21:19:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.543 21:19:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:18.543 21:19:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:18.543 21:19:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.543 21:19:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.543 21:19:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.543 21:19:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.543 21:19:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.543 21:19:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.543 21:19:42 -- paths/export.sh@5 -- # export PATH 00:12:18.543 21:19:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.543 21:19:42 -- nvmf/common.sh@46 -- # : 0 00:12:18.543 21:19:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:18.543 21:19:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:18.543 21:19:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:18.543 21:19:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.543 21:19:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.543 21:19:42 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:18.543 21:19:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:18.543 21:19:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:18.543 21:19:42 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:12:18.543 21:19:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:18.543 21:19:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.543 21:19:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:18.543 21:19:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:18.543 21:19:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:18.543 21:19:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.543 21:19:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.543 21:19:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.543 21:19:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:18.543 21:19:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:18.543 21:19:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:18.543 21:19:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:18.543 21:19:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:18.543 21:19:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:18.543 21:19:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.543 21:19:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.543 21:19:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:18.543 21:19:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:18.543 21:19:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:18.543 21:19:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:18.543 21:19:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:18.543 21:19:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.543 21:19:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:18.543 21:19:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:18.543 21:19:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:18.543 21:19:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:18.543 21:19:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:18.802 21:19:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:18.802 Cannot find device "nvmf_tgt_br" 00:12:18.802 21:19:42 -- nvmf/common.sh@154 -- # true 00:12:18.802 21:19:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:18.802 Cannot find device "nvmf_tgt_br2" 00:12:18.802 21:19:42 -- nvmf/common.sh@155 -- # true 00:12:18.802 21:19:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:18.802 21:19:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:18.802 Cannot find device "nvmf_tgt_br" 00:12:18.802 21:19:42 -- nvmf/common.sh@157 -- # true 00:12:18.802 21:19:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:18.802 Cannot find device "nvmf_tgt_br2" 00:12:18.802 21:19:42 -- nvmf/common.sh@158 -- # true 00:12:18.802 21:19:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:18.802 21:19:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:18.802 21:19:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:18.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.802 21:19:42 -- nvmf/common.sh@161 -- # true 00:12:18.802 21:19:42 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:18.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.802 21:19:42 -- nvmf/common.sh@162 -- # true 00:12:18.802 21:19:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:18.802 21:19:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:18.802 21:19:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:18.802 21:19:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:18.802 21:19:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:18.802 21:19:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:18.802 21:19:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:18.802 21:19:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:18.802 21:19:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:18.802 21:19:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:18.802 21:19:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:18.802 21:19:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:18.802 21:19:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:18.802 21:19:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:18.802 21:19:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:18.802 21:19:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:18.802 21:19:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:18.802 21:19:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:18.802 21:19:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:18.802 21:19:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:18.802 21:19:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:18.802 21:19:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:18.802 21:19:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:19.061 21:19:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:19.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:12:19.061 00:12:19.061 --- 10.0.0.2 ping statistics --- 00:12:19.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.061 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:19.061 21:19:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:19.061 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:19.061 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:12:19.061 00:12:19.061 --- 10.0.0.3 ping statistics --- 00:12:19.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.061 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:19.061 21:19:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:19.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:19.061 00:12:19.061 --- 10.0.0.1 ping statistics --- 00:12:19.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.061 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:19.061 21:19:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.061 21:19:42 -- nvmf/common.sh@421 -- # return 0 00:12:19.061 21:19:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:19.061 21:19:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.061 21:19:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:19.061 21:19:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:19.061 21:19:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.061 21:19:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:19.061 21:19:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:19.061 21:19:42 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=77785 00:12:19.061 21:19:42 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:19.061 21:19:42 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:19.061 21:19:42 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 77785 00:12:19.061 21:19:42 -- common/autotest_common.sh@829 -- # '[' -z 77785 ']' 00:12:19.061 21:19:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.061 21:19:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.061 21:19:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
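This nvmf_veth_init block is the same topology every tcp test in this run builds before starting a target: one veth pair for the initiator on the host side, two veth pairs whose far ends sit inside the nvmf_tgt_ns_spdk namespace, and a bridge tying the host-side ends together. Condensed from the commands traced above (the individual "ip link set ... up" steps and the ping checks are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT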
00:12:19.061 21:19:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.061 21:19:42 -- common/autotest_common.sh@10 -- # set +x 00:12:19.998 21:19:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.998 21:19:43 -- common/autotest_common.sh@862 -- # return 0 00:12:19.998 21:19:43 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:19.998 21:19:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.998 21:19:43 -- common/autotest_common.sh@10 -- # set +x 00:12:19.998 21:19:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.998 21:19:43 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:12:19.998 21:19:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.998 21:19:43 -- common/autotest_common.sh@10 -- # set +x 00:12:19.998 Malloc0 00:12:19.998 21:19:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.998 21:19:43 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:19.998 21:19:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.998 21:19:43 -- common/autotest_common.sh@10 -- # set +x 00:12:19.998 21:19:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.998 21:19:43 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:19.998 21:19:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.998 21:19:43 -- common/autotest_common.sh@10 -- # set +x 00:12:19.998 21:19:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.998 21:19:43 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.998 21:19:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.998 21:19:43 -- common/autotest_common.sh@10 -- # set +x 00:12:19.998 21:19:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.999 21:19:43 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:12:19.999 21:19:43 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:12:20.258 Shutting down the fuzz application 00:12:20.258 21:19:43 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:12:20.516 Shutting down the fuzz application 00:12:20.516 21:19:44 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.516 21:19:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.516 21:19:44 -- common/autotest_common.sh@10 -- # set +x 00:12:20.516 21:19:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.516 21:19:44 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:20.516 21:19:44 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:12:20.516 21:19:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:20.516 21:19:44 -- nvmf/common.sh@116 -- # sync 00:12:20.776 21:19:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:20.776 21:19:44 -- nvmf/common.sh@119 -- # set +e 00:12:20.776 21:19:44 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:12:20.776 21:19:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:20.776 rmmod nvme_tcp 00:12:20.776 rmmod nvme_fabrics 00:12:20.776 rmmod nvme_keyring 00:12:20.776 21:19:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:20.776 21:19:44 -- nvmf/common.sh@123 -- # set -e 00:12:20.776 21:19:44 -- nvmf/common.sh@124 -- # return 0 00:12:20.776 21:19:44 -- nvmf/common.sh@477 -- # '[' -n 77785 ']' 00:12:20.776 21:19:44 -- nvmf/common.sh@478 -- # killprocess 77785 00:12:20.776 21:19:44 -- common/autotest_common.sh@936 -- # '[' -z 77785 ']' 00:12:20.776 21:19:44 -- common/autotest_common.sh@940 -- # kill -0 77785 00:12:20.776 21:19:44 -- common/autotest_common.sh@941 -- # uname 00:12:20.776 21:19:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:20.776 21:19:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77785 00:12:20.776 21:19:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:20.776 21:19:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:20.776 killing process with pid 77785 00:12:20.776 21:19:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77785' 00:12:20.776 21:19:44 -- common/autotest_common.sh@955 -- # kill 77785 00:12:20.776 21:19:44 -- common/autotest_common.sh@960 -- # wait 77785 00:12:21.035 21:19:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:21.035 21:19:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:21.035 21:19:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:21.035 21:19:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.036 21:19:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:21.036 21:19:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.036 21:19:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.036 21:19:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.036 21:19:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:21.036 21:19:44 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:12:21.036 00:12:21.036 real 0m2.556s 00:12:21.036 user 0m2.678s 00:12:21.036 sys 0m0.560s 00:12:21.036 21:19:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:21.036 ************************************ 00:12:21.036 END TEST nvmf_fuzz 00:12:21.036 ************************************ 00:12:21.036 21:19:44 -- common/autotest_common.sh@10 -- # set +x 00:12:21.036 21:19:44 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:21.036 21:19:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:21.036 21:19:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:21.036 21:19:44 -- common/autotest_common.sh@10 -- # set +x 00:12:21.036 ************************************ 00:12:21.036 START TEST nvmf_multiconnection 00:12:21.036 ************************************ 00:12:21.036 21:19:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:21.036 * Looking for test storage... 
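Before the two nvme_fuzz runs above, fabrics_fuzz.sh provisioned the target with a single malloc-backed subsystem; the whole sequence fits in a few RPCs. A sketch of those traced commands, with rpc_cmd written out as a plain wrapper around scripts/rpc.py (the real rpc_cmd in autotest_common.sh is slightly more involved) and paths shown relative to the spdk repo:

    rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create -b Malloc0 64 512
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    # 30-second randomized pass (seed 123456), then a replay of the canned example.json cases
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" \
        -j test/app/fuzz/nvme_fuzz/example.json -a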
00:12:21.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:21.036 21:19:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:21.036 21:19:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:21.036 21:19:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:21.296 21:19:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:21.296 21:19:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:21.296 21:19:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:21.296 21:19:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:21.296 21:19:44 -- scripts/common.sh@335 -- # IFS=.-: 00:12:21.296 21:19:44 -- scripts/common.sh@335 -- # read -ra ver1 00:12:21.296 21:19:44 -- scripts/common.sh@336 -- # IFS=.-: 00:12:21.296 21:19:44 -- scripts/common.sh@336 -- # read -ra ver2 00:12:21.296 21:19:44 -- scripts/common.sh@337 -- # local 'op=<' 00:12:21.296 21:19:44 -- scripts/common.sh@339 -- # ver1_l=2 00:12:21.296 21:19:44 -- scripts/common.sh@340 -- # ver2_l=1 00:12:21.296 21:19:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:21.296 21:19:44 -- scripts/common.sh@343 -- # case "$op" in 00:12:21.296 21:19:44 -- scripts/common.sh@344 -- # : 1 00:12:21.296 21:19:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:21.296 21:19:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:21.296 21:19:44 -- scripts/common.sh@364 -- # decimal 1 00:12:21.296 21:19:44 -- scripts/common.sh@352 -- # local d=1 00:12:21.296 21:19:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:21.296 21:19:44 -- scripts/common.sh@354 -- # echo 1 00:12:21.296 21:19:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:21.296 21:19:44 -- scripts/common.sh@365 -- # decimal 2 00:12:21.296 21:19:44 -- scripts/common.sh@352 -- # local d=2 00:12:21.296 21:19:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.296 21:19:44 -- scripts/common.sh@354 -- # echo 2 00:12:21.296 21:19:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:21.296 21:19:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:21.296 21:19:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:21.296 21:19:44 -- scripts/common.sh@367 -- # return 0 00:12:21.296 21:19:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:21.296 21:19:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:21.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.296 --rc genhtml_branch_coverage=1 00:12:21.296 --rc genhtml_function_coverage=1 00:12:21.296 --rc genhtml_legend=1 00:12:21.296 --rc geninfo_all_blocks=1 00:12:21.296 --rc geninfo_unexecuted_blocks=1 00:12:21.296 00:12:21.296 ' 00:12:21.296 21:19:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:21.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.296 --rc genhtml_branch_coverage=1 00:12:21.296 --rc genhtml_function_coverage=1 00:12:21.296 --rc genhtml_legend=1 00:12:21.296 --rc geninfo_all_blocks=1 00:12:21.296 --rc geninfo_unexecuted_blocks=1 00:12:21.296 00:12:21.296 ' 00:12:21.296 21:19:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:21.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.296 --rc genhtml_branch_coverage=1 00:12:21.296 --rc genhtml_function_coverage=1 00:12:21.296 --rc genhtml_legend=1 00:12:21.296 --rc geninfo_all_blocks=1 00:12:21.296 --rc geninfo_unexecuted_blocks=1 00:12:21.296 00:12:21.296 ' 00:12:21.296 
21:19:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:21.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.296 --rc genhtml_branch_coverage=1 00:12:21.296 --rc genhtml_function_coverage=1 00:12:21.296 --rc genhtml_legend=1 00:12:21.296 --rc geninfo_all_blocks=1 00:12:21.296 --rc geninfo_unexecuted_blocks=1 00:12:21.296 00:12:21.296 ' 00:12:21.296 21:19:44 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:21.296 21:19:44 -- nvmf/common.sh@7 -- # uname -s 00:12:21.296 21:19:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.296 21:19:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.296 21:19:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.296 21:19:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.296 21:19:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.296 21:19:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.296 21:19:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.296 21:19:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.296 21:19:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.296 21:19:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.296 21:19:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:12:21.296 21:19:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:12:21.296 21:19:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.296 21:19:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.296 21:19:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:21.296 21:19:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:21.296 21:19:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.296 21:19:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.296 21:19:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.296 21:19:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.296 21:19:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.296 21:19:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.296 21:19:44 -- paths/export.sh@5 -- # export PATH 00:12:21.297 21:19:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.297 21:19:44 -- nvmf/common.sh@46 -- # : 0 00:12:21.297 21:19:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:21.297 21:19:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:21.297 21:19:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:21.297 21:19:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.297 21:19:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.297 21:19:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:21.297 21:19:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:21.297 21:19:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:21.297 21:19:44 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:21.297 21:19:44 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:21.297 21:19:44 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:12:21.297 21:19:44 -- target/multiconnection.sh@16 -- # nvmftestinit 00:12:21.297 21:19:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:21.297 21:19:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.297 21:19:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:21.297 21:19:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:21.297 21:19:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:21.297 21:19:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.297 21:19:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.297 21:19:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.297 21:19:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:21.297 21:19:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:21.297 21:19:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:21.297 21:19:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:21.297 21:19:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:21.297 21:19:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:21.297 21:19:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.297 21:19:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.297 21:19:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:21.297 21:19:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:21.297 21:19:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:21.297 21:19:44 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:21.297 21:19:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:21.297 21:19:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.297 21:19:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:21.297 21:19:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:21.297 21:19:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:21.297 21:19:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:21.297 21:19:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:21.297 21:19:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:21.297 Cannot find device "nvmf_tgt_br" 00:12:21.297 21:19:44 -- nvmf/common.sh@154 -- # true 00:12:21.297 21:19:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:21.297 Cannot find device "nvmf_tgt_br2" 00:12:21.297 21:19:44 -- nvmf/common.sh@155 -- # true 00:12:21.297 21:19:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:21.297 21:19:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:21.297 Cannot find device "nvmf_tgt_br" 00:12:21.297 21:19:44 -- nvmf/common.sh@157 -- # true 00:12:21.297 21:19:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:21.297 Cannot find device "nvmf_tgt_br2" 00:12:21.297 21:19:44 -- nvmf/common.sh@158 -- # true 00:12:21.297 21:19:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:21.297 21:19:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:21.297 21:19:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:21.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:21.297 21:19:44 -- nvmf/common.sh@161 -- # true 00:12:21.297 21:19:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:21.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:21.297 21:19:44 -- nvmf/common.sh@162 -- # true 00:12:21.297 21:19:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:21.297 21:19:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:21.297 21:19:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:21.297 21:19:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:21.297 21:19:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:21.297 21:19:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:21.555 21:19:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:21.555 21:19:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:21.555 21:19:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:21.555 21:19:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:21.555 21:19:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:21.555 21:19:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:21.555 21:19:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:21.555 21:19:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:21.555 21:19:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:12:21.555 21:19:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:21.555 21:19:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:21.555 21:19:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:21.555 21:19:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:21.555 21:19:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:21.555 21:19:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:21.555 21:19:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:21.555 21:19:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:21.555 21:19:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:21.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:12:21.555 00:12:21.555 --- 10.0.0.2 ping statistics --- 00:12:21.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.555 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:21.555 21:19:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:21.555 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:21.555 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:12:21.555 00:12:21.555 --- 10.0.0.3 ping statistics --- 00:12:21.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.555 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:21.555 21:19:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:21.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:21.555 00:12:21.555 --- 10.0.0.1 ping statistics --- 00:12:21.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.555 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:21.555 21:19:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.555 21:19:45 -- nvmf/common.sh@421 -- # return 0 00:12:21.555 21:19:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:21.555 21:19:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.555 21:19:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:21.555 21:19:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:21.555 21:19:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.555 21:19:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:21.555 21:19:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:21.555 21:19:45 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:12:21.555 21:19:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:21.555 21:19:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:21.555 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:21.555 21:19:45 -- nvmf/common.sh@469 -- # nvmfpid=77974 00:12:21.555 21:19:45 -- nvmf/common.sh@470 -- # waitforlisten 77974 00:12:21.555 21:19:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.555 21:19:45 -- common/autotest_common.sh@829 -- # '[' -z 77974 ']' 00:12:21.555 21:19:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.555 21:19:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.555 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:12:21.555 21:19:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.555 21:19:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.555 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:21.555 [2024-11-28 21:19:45.264101] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:21.555 [2024-11-28 21:19:45.264246] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.814 [2024-11-28 21:19:45.406517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.814 [2024-11-28 21:19:45.440009] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:21.814 [2024-11-28 21:19:45.440182] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.814 [2024-11-28 21:19:45.440194] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.814 [2024-11-28 21:19:45.440201] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.814 [2024-11-28 21:19:45.440268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.814 [2024-11-28 21:19:45.440557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.814 [2024-11-28 21:19:45.440560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.814 [2024-11-28 21:19:45.441224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.814 21:19:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.814 21:19:45 -- common/autotest_common.sh@862 -- # return 0 00:12:21.814 21:19:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:21.814 21:19:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:21.814 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 21:19:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.073 21:19:45 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 [2024-11-28 21:19:45.574574] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@21 -- # seq 1 11 00:12:22.073 21:19:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:22.073 21:19:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 Malloc1 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 
21:19:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 [2024-11-28 21:19:45.640984] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:22.073 21:19:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 Malloc2 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:22.073 21:19:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 Malloc3 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
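The per-subsystem RPCs traced here repeat for Malloc1 through Malloc11; they are the body of the setup loop in multiconnection.sh (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 and NVMF_SUBSYS=11 were set earlier in this trace), roughly:

    for i in $(seq 1 11); do
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"          # 64 MB malloc bdev, 512-byte blocks
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done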
00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:22.073 21:19:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 Malloc4 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:22.073 21:19:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 Malloc5 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.073 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.073 21:19:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:22.073 21:19:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:12:22.073 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.073 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 Malloc6 00:12:22.333 21:19:45 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:22.333 21:19:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 Malloc7 00:12:22.333 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:22.333 21:19:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 Malloc8 00:12:22.333 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 
-- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:22.333 21:19:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 Malloc9 00:12:22.333 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:22.333 21:19:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:12:22.333 21:19:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 Malloc10 00:12:22.333 21:19:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:12:22.333 21:19:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:46 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:12:22.333 21:19:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:46 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:12:22.333 21:19:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:46 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:22.333 21:19:46 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:12:22.333 21:19:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:46 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 Malloc11 00:12:22.333 21:19:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:12:22.333 21:19:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:46 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:12:22.333 21:19:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:46 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:12:22.333 21:19:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.333 21:19:46 -- common/autotest_common.sh@10 -- # set +x 00:12:22.333 21:19:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.333 21:19:46 -- target/multiconnection.sh@28 -- # seq 1 11 00:12:22.333 21:19:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:22.333 21:19:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.592 21:19:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:12:22.592 21:19:46 -- common/autotest_common.sh@1187 -- # local i=0 00:12:22.592 21:19:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.592 21:19:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:22.592 21:19:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:24.519 21:19:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:24.520 21:19:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:24.520 21:19:48 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:12:24.520 21:19:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:24.520 21:19:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.520 21:19:48 -- common/autotest_common.sh@1197 -- # return 0 00:12:24.520 21:19:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:24.520 21:19:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:12:24.778 21:19:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:12:24.778 21:19:48 -- common/autotest_common.sh@1187 -- # local i=0 00:12:24.778 21:19:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.778 21:19:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:24.778 21:19:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:26.683 21:19:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:26.683 21:19:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:12:26.683 21:19:50 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:12:26.683 21:19:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:26.683 21:19:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.683 21:19:50 -- common/autotest_common.sh@1197 -- # return 0 00:12:26.683 21:19:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:26.683 21:19:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:12:26.941 21:19:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:12:26.941 21:19:50 -- common/autotest_common.sh@1187 -- # local i=0 00:12:26.941 21:19:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.941 21:19:50 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:26.941 21:19:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:28.836 21:19:52 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:28.836 21:19:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:28.836 21:19:52 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:12:28.836 21:19:52 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:28.836 21:19:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.836 21:19:52 -- common/autotest_common.sh@1197 -- # return 0 00:12:28.836 21:19:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:28.836 21:19:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:12:29.094 21:19:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:12:29.094 21:19:52 -- common/autotest_common.sh@1187 -- # local i=0 00:12:29.094 21:19:52 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.094 21:19:52 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:29.094 21:19:52 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:30.995 21:19:54 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:30.995 21:19:54 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:30.995 21:19:54 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:12:30.995 21:19:54 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:30.995 21:19:54 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.995 21:19:54 -- common/autotest_common.sh@1197 -- # return 0 00:12:30.995 21:19:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:30.995 21:19:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:12:31.253 21:19:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:12:31.253 21:19:54 -- common/autotest_common.sh@1187 -- # local i=0 00:12:31.253 21:19:54 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.253 21:19:54 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:31.253 21:19:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:33.158 21:19:56 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:33.158 21:19:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:33.158 21:19:56 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:12:33.158 21:19:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:33.158 21:19:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.158 21:19:56 -- common/autotest_common.sh@1197 -- # return 0 00:12:33.158 21:19:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:33.158 21:19:56 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:12:33.417 21:19:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:12:33.417 21:19:57 -- common/autotest_common.sh@1187 -- # local i=0 00:12:33.417 21:19:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.417 21:19:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:33.417 21:19:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:35.322 21:19:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:35.322 21:19:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:35.322 21:19:59 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:12:35.322 21:19:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:35.322 21:19:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.322 21:19:59 -- common/autotest_common.sh@1197 -- # return 0 00:12:35.322 21:19:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:35.323 21:19:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:12:35.581 21:19:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:12:35.581 21:19:59 -- common/autotest_common.sh@1187 -- # local i=0 00:12:35.581 21:19:59 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.582 21:19:59 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:35.582 21:19:59 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:37.488 21:20:01 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:37.488 21:20:01 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:37.488 21:20:01 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:12:37.488 21:20:01 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:37.488 21:20:01 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.488 21:20:01 -- common/autotest_common.sh@1197 -- # return 0 00:12:37.488 21:20:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:37.488 21:20:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:12:37.748 21:20:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:12:37.748 21:20:01 -- common/autotest_common.sh@1187 -- # local i=0 00:12:37.748 21:20:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.748 21:20:01 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:37.748 21:20:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:39.654 21:20:03 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:39.654 21:20:03 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:39.654 21:20:03 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:12:39.654 21:20:03 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:39.654 21:20:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.654 21:20:03 -- common/autotest_common.sh@1197 -- # return 0 00:12:39.654 21:20:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:39.654 21:20:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:12:39.913 21:20:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:12:39.913 21:20:03 -- common/autotest_common.sh@1187 -- # local i=0 00:12:39.913 21:20:03 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.913 21:20:03 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:39.913 21:20:03 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:41.820 21:20:05 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:41.820 21:20:05 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:41.820 21:20:05 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:12:42.079 21:20:05 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:42.079 21:20:05 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.079 21:20:05 -- common/autotest_common.sh@1197 -- # return 0 00:12:42.079 21:20:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:42.079 21:20:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:12:42.079 21:20:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:12:42.079 21:20:05 -- common/autotest_common.sh@1187 -- # local i=0 00:12:42.079 21:20:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.079 21:20:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:42.079 21:20:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:43.996 21:20:07 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:43.996 21:20:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:43.996 21:20:07 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:12:44.267 21:20:07 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:44.267 21:20:07 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.267 21:20:07 -- common/autotest_common.sh@1197 -- # return 0 00:12:44.267 21:20:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:44.267 21:20:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:12:44.267 21:20:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:12:44.267 21:20:07 -- common/autotest_common.sh@1187 -- # local i=0 
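Each of the eleven host-side connections above is the same nvme-cli call followed by polling lsblk until a namespace with the expected serial appears. A minimal sketch of that loop, assuming the same host UUID and subsystem naming as this run (the explicit retry loop is illustrative; the harness's waitforserial helper caps it at roughly 15 attempts with a 2-second sleep):

    host_uuid=4107dce9-1000-4a50-9f10-42d161d64cc8
    for i in $(seq 1 11); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "nqn.2016-06.io.spdk:cnode${i}" \
            --hostnqn="nqn.2014-08.org.nvmexpress:uuid:${host_uuid}" --hostid="${host_uuid}"
        # poll until a block device with serial SPDK${i} shows up
        until lsblk -l -o NAME,SERIAL | grep -qw "SPDK${i}"; do sleep 2; done
    done

Teardown at the end of the test is the mirror image, one nvme disconnect -n nqn.2016-06.io.spdk:cnode${i} per subsystem, as the trailing log entries show.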
00:12:44.267 21:20:07 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.267 21:20:07 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:44.267 21:20:07 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:46.202 21:20:09 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:46.202 21:20:09 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:46.202 21:20:09 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:12:46.202 21:20:09 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:46.202 21:20:09 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.202 21:20:09 -- common/autotest_common.sh@1197 -- # return 0 00:12:46.202 21:20:09 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:12:46.202 [global] 00:12:46.202 thread=1 00:12:46.202 invalidate=1 00:12:46.202 rw=read 00:12:46.202 time_based=1 00:12:46.202 runtime=10 00:12:46.202 ioengine=libaio 00:12:46.202 direct=1 00:12:46.202 bs=262144 00:12:46.202 iodepth=64 00:12:46.202 norandommap=1 00:12:46.202 numjobs=1 00:12:46.202 00:12:46.461 [job0] 00:12:46.461 filename=/dev/nvme0n1 00:12:46.461 [job1] 00:12:46.461 filename=/dev/nvme10n1 00:12:46.461 [job2] 00:12:46.461 filename=/dev/nvme1n1 00:12:46.461 [job3] 00:12:46.461 filename=/dev/nvme2n1 00:12:46.461 [job4] 00:12:46.461 filename=/dev/nvme3n1 00:12:46.461 [job5] 00:12:46.461 filename=/dev/nvme4n1 00:12:46.461 [job6] 00:12:46.461 filename=/dev/nvme5n1 00:12:46.461 [job7] 00:12:46.461 filename=/dev/nvme6n1 00:12:46.461 [job8] 00:12:46.461 filename=/dev/nvme7n1 00:12:46.461 [job9] 00:12:46.461 filename=/dev/nvme8n1 00:12:46.461 [job10] 00:12:46.461 filename=/dev/nvme9n1 00:12:46.461 Could not set queue depth (nvme0n1) 00:12:46.461 Could not set queue depth (nvme10n1) 00:12:46.461 Could not set queue depth (nvme1n1) 00:12:46.461 Could not set queue depth (nvme2n1) 00:12:46.461 Could not set queue depth (nvme3n1) 00:12:46.461 Could not set queue depth (nvme4n1) 00:12:46.461 Could not set queue depth (nvme5n1) 00:12:46.461 Could not set queue depth (nvme6n1) 00:12:46.461 Could not set queue depth (nvme7n1) 00:12:46.461 Could not set queue depth (nvme8n1) 00:12:46.461 Could not set queue depth (nvme9n1) 00:12:46.720 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:46.720 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:46.720 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:46.720 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:46.720 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:46.720 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:46.720 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:46.720 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:46.720 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:46.720 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:12:46.720 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:46.720 fio-3.35 00:12:46.720 Starting 11 threads 00:12:58.931 00:12:58.931 job0: (groupid=0, jobs=1): err= 0: pid=78431: Thu Nov 28 21:20:20 2024 00:12:58.931 read: IOPS=588, BW=147MiB/s (154MB/s)(1487MiB/10108msec) 00:12:58.931 slat (usec): min=21, max=30074, avg=1676.82, stdev=3847.09 00:12:58.931 clat (msec): min=36, max=252, avg=106.98, stdev=29.43 00:12:58.931 lat (msec): min=36, max=252, avg=108.66, stdev=29.93 00:12:58.931 clat percentiles (msec): 00:12:58.931 | 1.00th=[ 55], 5.00th=[ 60], 10.00th=[ 63], 20.00th=[ 75], 00:12:58.931 | 30.00th=[ 92], 40.00th=[ 105], 50.00th=[ 112], 60.00th=[ 116], 00:12:58.931 | 70.00th=[ 122], 80.00th=[ 138], 90.00th=[ 144], 95.00th=[ 146], 00:12:58.931 | 99.00th=[ 159], 99.50th=[ 178], 99.90th=[ 228], 99.95th=[ 228], 00:12:58.931 | 99.99th=[ 253] 00:12:58.931 bw ( KiB/s): min=109568, max=254976, per=8.56%, avg=150630.40, stdev=42527.84, samples=20 00:12:58.931 iops : min= 428, max= 996, avg=588.40, stdev=166.12, samples=20 00:12:58.931 lat (msec) : 50=0.40%, 100=36.52%, 250=63.04%, 500=0.03% 00:12:58.931 cpu : usr=0.34%, sys=2.22%, ctx=1323, majf=0, minf=4097 00:12:58.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:58.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.931 issued rwts: total=5947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.931 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.931 job1: (groupid=0, jobs=1): err= 0: pid=78432: Thu Nov 28 21:20:20 2024 00:12:58.931 read: IOPS=657, BW=164MiB/s (172MB/s)(1654MiB/10058msec) 00:12:58.931 slat (usec): min=21, max=49958, avg=1506.67, stdev=3504.23 00:12:58.931 clat (msec): min=32, max=169, avg=95.67, stdev=14.51 00:12:58.931 lat (msec): min=32, max=174, avg=97.17, stdev=14.71 00:12:58.931 clat percentiles (msec): 00:12:58.931 | 1.00th=[ 71], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 86], 00:12:58.931 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 95], 00:12:58.931 | 70.00th=[ 97], 80.00th=[ 103], 90.00th=[ 120], 95.00th=[ 126], 00:12:58.931 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 161], 99.95th=[ 167], 00:12:58.931 | 99.99th=[ 171] 00:12:58.931 bw ( KiB/s): min=126976, max=187392, per=9.53%, avg=167705.60, stdev=19422.98, samples=20 00:12:58.931 iops : min= 496, max= 732, avg=655.10, stdev=75.87, samples=20 00:12:58.931 lat (msec) : 50=0.42%, 100=76.73%, 250=22.85% 00:12:58.931 cpu : usr=0.46%, sys=2.74%, ctx=1439, majf=0, minf=4097 00:12:58.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:58.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.931 issued rwts: total=6614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.931 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.931 job2: (groupid=0, jobs=1): err= 0: pid=78433: Thu Nov 28 21:20:20 2024 00:12:58.931 read: IOPS=517, BW=129MiB/s (136MB/s)(1309MiB/10112msec) 00:12:58.931 slat (usec): min=14, max=47745, avg=1889.84, stdev=4409.81 00:12:58.931 clat (msec): min=52, max=271, avg=121.56, stdev=19.61 00:12:58.931 lat (msec): min=52, max=271, avg=123.45, stdev=19.99 00:12:58.931 clat percentiles (msec): 00:12:58.932 | 1.00th=[ 68], 5.00th=[ 89], 10.00th=[ 
105], 20.00th=[ 111], 00:12:58.932 | 30.00th=[ 114], 40.00th=[ 116], 50.00th=[ 118], 60.00th=[ 122], 00:12:58.932 | 70.00th=[ 129], 80.00th=[ 140], 90.00th=[ 146], 95.00th=[ 150], 00:12:58.932 | 99.00th=[ 169], 99.50th=[ 197], 99.90th=[ 228], 99.95th=[ 232], 00:12:58.932 | 99.99th=[ 271] 00:12:58.932 bw ( KiB/s): min=109056, max=164864, per=7.52%, avg=132391.60, stdev=16609.23, samples=20 00:12:58.932 iops : min= 426, max= 644, avg=516.85, stdev=64.95, samples=20 00:12:58.932 lat (msec) : 100=8.25%, 250=91.73%, 500=0.02% 00:12:58.932 cpu : usr=0.27%, sys=2.22%, ctx=1262, majf=0, minf=4097 00:12:58.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:58.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.932 issued rwts: total=5234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.932 job3: (groupid=0, jobs=1): err= 0: pid=78434: Thu Nov 28 21:20:20 2024 00:12:58.932 read: IOPS=838, BW=210MiB/s (220MB/s)(2118MiB/10103msec) 00:12:58.932 slat (usec): min=21, max=48949, avg=1153.91, stdev=3164.90 00:12:58.932 clat (msec): min=3, max=239, avg=75.05, stdev=48.40 00:12:58.932 lat (msec): min=3, max=245, avg=76.21, stdev=49.15 00:12:58.932 clat percentiles (msec): 00:12:58.932 | 1.00th=[ 13], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 32], 00:12:58.932 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 35], 60.00th=[ 109], 00:12:58.932 | 70.00th=[ 116], 80.00th=[ 125], 90.00th=[ 142], 95.00th=[ 146], 00:12:58.932 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 226], 99.95th=[ 234], 00:12:58.932 | 99.99th=[ 241] 00:12:58.932 bw ( KiB/s): min=109056, max=513024, per=12.23%, avg=215255.05, stdev=156041.51, samples=20 00:12:58.932 iops : min= 426, max= 2004, avg=840.80, stdev=609.55, samples=20 00:12:58.932 lat (msec) : 4=0.07%, 10=0.64%, 20=0.91%, 50=51.16%, 100=3.73% 00:12:58.932 lat (msec) : 250=43.50% 00:12:58.932 cpu : usr=0.21%, sys=3.46%, ctx=1900, majf=0, minf=4097 00:12:58.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:58.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.932 issued rwts: total=8472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.932 job4: (groupid=0, jobs=1): err= 0: pid=78435: Thu Nov 28 21:20:20 2024 00:12:58.932 read: IOPS=606, BW=152MiB/s (159MB/s)(1524MiB/10046msec) 00:12:58.932 slat (usec): min=21, max=29430, avg=1636.03, stdev=3512.79 00:12:58.932 clat (msec): min=20, max=141, avg=103.70, stdev=15.20 00:12:58.932 lat (msec): min=20, max=142, avg=105.34, stdev=15.46 00:12:58.932 clat percentiles (msec): 00:12:58.932 | 1.00th=[ 70], 5.00th=[ 81], 10.00th=[ 85], 20.00th=[ 89], 00:12:58.932 | 30.00th=[ 94], 40.00th=[ 99], 50.00th=[ 109], 60.00th=[ 112], 00:12:58.932 | 70.00th=[ 115], 80.00th=[ 117], 90.00th=[ 122], 95.00th=[ 125], 00:12:58.932 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 138], 99.95th=[ 138], 00:12:58.932 | 99.99th=[ 142] 00:12:58.932 bw ( KiB/s): min=132854, max=185856, per=8.77%, avg=154406.75, stdev=18568.35, samples=20 00:12:58.932 iops : min= 518, max= 726, avg=602.90, stdev=72.61, samples=20 00:12:58.932 lat (msec) : 50=0.26%, 100=42.86%, 250=56.88% 00:12:58.932 cpu : usr=0.38%, sys=2.73%, ctx=1400, majf=0, minf=4097 00:12:58.932 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:12:58.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.932 issued rwts: total=6094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.932 job5: (groupid=0, jobs=1): err= 0: pid=78436: Thu Nov 28 21:20:20 2024 00:12:58.932 read: IOPS=581, BW=145MiB/s (152MB/s)(1470MiB/10110msec) 00:12:58.932 slat (usec): min=21, max=31611, avg=1697.91, stdev=3893.17 00:12:58.932 clat (msec): min=17, max=246, avg=108.15, stdev=30.33 00:12:58.932 lat (msec): min=18, max=246, avg=109.85, stdev=30.79 00:12:58.932 clat percentiles (msec): 00:12:58.932 | 1.00th=[ 54], 5.00th=[ 60], 10.00th=[ 63], 20.00th=[ 72], 00:12:58.932 | 30.00th=[ 95], 40.00th=[ 109], 50.00th=[ 114], 60.00th=[ 118], 00:12:58.932 | 70.00th=[ 123], 80.00th=[ 140], 90.00th=[ 144], 95.00th=[ 148], 00:12:58.932 | 99.00th=[ 161], 99.50th=[ 186], 99.90th=[ 243], 99.95th=[ 247], 00:12:58.932 | 99.99th=[ 247] 00:12:58.932 bw ( KiB/s): min=111104, max=265197, per=8.46%, avg=148888.90, stdev=42852.33, samples=20 00:12:58.932 iops : min= 434, max= 1035, avg=581.35, stdev=167.31, samples=20 00:12:58.932 lat (msec) : 20=0.10%, 50=0.39%, 100=32.97%, 250=66.54% 00:12:58.932 cpu : usr=0.16%, sys=2.20%, ctx=1324, majf=0, minf=4098 00:12:58.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:58.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.932 issued rwts: total=5881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.932 job6: (groupid=0, jobs=1): err= 0: pid=78437: Thu Nov 28 21:20:20 2024 00:12:58.932 read: IOPS=599, BW=150MiB/s (157MB/s)(1506MiB/10047msec) 00:12:58.932 slat (usec): min=22, max=48943, avg=1656.10, stdev=3658.72 00:12:58.932 clat (msec): min=24, max=143, avg=104.93, stdev=15.10 00:12:58.932 lat (msec): min=25, max=151, avg=106.59, stdev=15.33 00:12:58.932 clat percentiles (msec): 00:12:58.932 | 1.00th=[ 71], 5.00th=[ 82], 10.00th=[ 86], 20.00th=[ 91], 00:12:58.932 | 30.00th=[ 94], 40.00th=[ 102], 50.00th=[ 111], 60.00th=[ 113], 00:12:58.932 | 70.00th=[ 115], 80.00th=[ 118], 90.00th=[ 123], 95.00th=[ 125], 00:12:58.932 | 99.00th=[ 131], 99.50th=[ 134], 99.90th=[ 138], 99.95th=[ 140], 00:12:58.932 | 99.99th=[ 144] 00:12:58.932 bw ( KiB/s): min=131072, max=180736, per=8.67%, avg=152545.95, stdev=19545.40, samples=20 00:12:58.932 iops : min= 512, max= 706, avg=595.65, stdev=76.40, samples=20 00:12:58.932 lat (msec) : 50=0.33%, 100=38.86%, 250=60.81% 00:12:58.932 cpu : usr=0.41%, sys=2.69%, ctx=1365, majf=0, minf=4097 00:12:58.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:12:58.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.932 issued rwts: total=6022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.932 job7: (groupid=0, jobs=1): err= 0: pid=78438: Thu Nov 28 21:20:20 2024 00:12:58.932 read: IOPS=656, BW=164MiB/s (172MB/s)(1651MiB/10058msec) 00:12:58.932 slat (usec): min=19, max=47448, avg=1509.56, stdev=3409.87 00:12:58.932 clat (msec): min=17, 
max=156, avg=95.82, stdev=13.93 00:12:58.932 lat (msec): min=18, max=168, avg=97.33, stdev=14.09 00:12:58.932 clat percentiles (msec): 00:12:58.932 | 1.00th=[ 73], 5.00th=[ 81], 10.00th=[ 84], 20.00th=[ 87], 00:12:58.932 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 95], 00:12:58.932 | 70.00th=[ 97], 80.00th=[ 103], 90.00th=[ 118], 95.00th=[ 125], 00:12:58.932 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 153], 99.95th=[ 153], 00:12:58.932 | 99.99th=[ 157] 00:12:58.932 bw ( KiB/s): min=123126, max=184832, per=9.51%, avg=167505.75, stdev=19385.43, samples=20 00:12:58.932 iops : min= 480, max= 722, avg=654.05, stdev=75.95, samples=20 00:12:58.932 lat (msec) : 20=0.05%, 50=0.33%, 100=76.17%, 250=23.45% 00:12:58.932 cpu : usr=0.34%, sys=2.69%, ctx=1471, majf=0, minf=4097 00:12:58.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:58.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.932 issued rwts: total=6605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.932 job8: (groupid=0, jobs=1): err= 0: pid=78439: Thu Nov 28 21:20:20 2024 00:12:58.932 read: IOPS=740, BW=185MiB/s (194MB/s)(1863MiB/10062msec) 00:12:58.932 slat (usec): min=21, max=91723, avg=1335.48, stdev=3394.12 00:12:58.932 clat (msec): min=15, max=224, avg=84.96, stdev=26.57 00:12:58.932 lat (msec): min=16, max=224, avg=86.30, stdev=26.97 00:12:58.932 clat percentiles (msec): 00:12:58.932 | 1.00th=[ 27], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 81], 00:12:58.932 | 30.00th=[ 85], 40.00th=[ 88], 50.00th=[ 91], 60.00th=[ 93], 00:12:58.932 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 113], 95.00th=[ 124], 00:12:58.932 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 153], 99.95th=[ 153], 00:12:58.932 | 99.99th=[ 224] 00:12:58.932 bw ( KiB/s): min=129788, max=475136, per=10.75%, avg=189182.45, stdev=69627.81, samples=20 00:12:58.932 iops : min= 506, max= 1856, avg=738.75, stdev=272.09, samples=20 00:12:58.932 lat (msec) : 20=0.34%, 50=15.81%, 100=66.67%, 250=17.19% 00:12:58.932 cpu : usr=0.28%, sys=2.61%, ctx=1593, majf=0, minf=4097 00:12:58.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:58.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.932 issued rwts: total=7452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.932 job9: (groupid=0, jobs=1): err= 0: pid=78440: Thu Nov 28 21:20:20 2024 00:12:58.932 read: IOPS=521, BW=130MiB/s (137MB/s)(1319MiB/10110msec) 00:12:58.932 slat (usec): min=21, max=40454, avg=1871.01, stdev=4266.75 00:12:58.932 clat (msec): min=8, max=258, avg=120.58, stdev=21.85 00:12:58.932 lat (msec): min=10, max=258, avg=122.45, stdev=22.29 00:12:58.932 clat percentiles (msec): 00:12:58.932 | 1.00th=[ 48], 5.00th=[ 88], 10.00th=[ 95], 20.00th=[ 110], 00:12:58.932 | 30.00th=[ 114], 40.00th=[ 116], 50.00th=[ 120], 60.00th=[ 123], 00:12:58.932 | 70.00th=[ 132], 80.00th=[ 142], 90.00th=[ 146], 95.00th=[ 150], 00:12:58.932 | 99.00th=[ 161], 99.50th=[ 188], 99.90th=[ 243], 99.95th=[ 247], 00:12:58.932 | 99.99th=[ 259] 00:12:58.932 bw ( KiB/s): min=110592, max=168622, per=7.58%, avg=133520.25, stdev=18498.76, samples=20 00:12:58.932 iops : min= 432, max= 658, avg=521.30, stdev=72.25, 
samples=20 00:12:58.932 lat (msec) : 10=0.02%, 20=0.17%, 50=0.87%, 100=11.56%, 250=87.36% 00:12:58.932 lat (msec) : 500=0.02% 00:12:58.932 cpu : usr=0.20%, sys=1.85%, ctx=1230, majf=0, minf=4097 00:12:58.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:58.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.933 issued rwts: total=5277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.933 job10: (groupid=0, jobs=1): err= 0: pid=78441: Thu Nov 28 21:20:20 2024 00:12:58.933 read: IOPS=591, BW=148MiB/s (155MB/s)(1485MiB/10042msec) 00:12:58.933 slat (usec): min=18, max=54017, avg=1636.74, stdev=3675.15 00:12:58.933 clat (msec): min=36, max=153, avg=106.43, stdev=15.81 00:12:58.933 lat (msec): min=39, max=153, avg=108.06, stdev=16.07 00:12:58.933 clat percentiles (msec): 00:12:58.933 | 1.00th=[ 73], 5.00th=[ 82], 10.00th=[ 86], 20.00th=[ 91], 00:12:58.933 | 30.00th=[ 96], 40.00th=[ 104], 50.00th=[ 111], 60.00th=[ 114], 00:12:58.933 | 70.00th=[ 117], 80.00th=[ 120], 90.00th=[ 125], 95.00th=[ 129], 00:12:58.933 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 148], 99.95th=[ 148], 00:12:58.933 | 99.99th=[ 155] 00:12:58.933 bw ( KiB/s): min=126204, max=181760, per=8.55%, avg=150463.80, stdev=20002.06, samples=20 00:12:58.933 iops : min= 492, max= 710, avg=587.70, stdev=78.20, samples=20 00:12:58.933 lat (msec) : 50=0.51%, 100=36.78%, 250=62.71% 00:12:58.933 cpu : usr=0.34%, sys=2.23%, ctx=1362, majf=0, minf=4097 00:12:58.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:58.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.933 issued rwts: total=5940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.933 00:12:58.933 Run status group 0 (all jobs): 00:12:58.933 READ: bw=1719MiB/s (1803MB/s), 129MiB/s-210MiB/s (136MB/s-220MB/s), io=17.0GiB (18.2GB), run=10042-10112msec 00:12:58.933 00:12:58.933 Disk stats (read/write): 00:12:58.933 nvme0n1: ios=11771/0, merge=0/0, ticks=1228083/0, in_queue=1228083, util=97.67% 00:12:58.933 nvme10n1: ios=13100/0, merge=0/0, ticks=1234918/0, in_queue=1234918, util=97.79% 00:12:58.933 nvme1n1: ios=10344/0, merge=0/0, ticks=1226375/0, in_queue=1226375, util=98.06% 00:12:58.933 nvme2n1: ios=16821/0, merge=0/0, ticks=1229160/0, in_queue=1229160, util=98.16% 00:12:58.933 nvme3n1: ios=12064/0, merge=0/0, ticks=1233026/0, in_queue=1233026, util=98.24% 00:12:58.933 nvme4n1: ios=11640/0, merge=0/0, ticks=1227611/0, in_queue=1227611, util=98.49% 00:12:58.933 nvme5n1: ios=11920/0, merge=0/0, ticks=1232363/0, in_queue=1232363, util=98.58% 00:12:58.933 nvme6n1: ios=13085/0, merge=0/0, ticks=1232449/0, in_queue=1232449, util=98.65% 00:12:58.933 nvme7n1: ios=14780/0, merge=0/0, ticks=1233823/0, in_queue=1233823, util=98.98% 00:12:58.933 nvme8n1: ios=10434/0, merge=0/0, ticks=1228780/0, in_queue=1228780, util=99.03% 00:12:58.933 nvme9n1: ios=11749/0, merge=0/0, ticks=1233781/0, in_queue=1233781, util=99.03% 00:12:58.933 21:20:20 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:12:58.933 [global] 00:12:58.933 thread=1 00:12:58.933 invalidate=1 00:12:58.933 rw=randwrite 
00:12:58.933 time_based=1 00:12:58.933 runtime=10 00:12:58.933 ioengine=libaio 00:12:58.933 direct=1 00:12:58.933 bs=262144 00:12:58.933 iodepth=64 00:12:58.933 norandommap=1 00:12:58.933 numjobs=1 00:12:58.933 00:12:58.933 [job0] 00:12:58.933 filename=/dev/nvme0n1 00:12:58.933 [job1] 00:12:58.933 filename=/dev/nvme10n1 00:12:58.933 [job2] 00:12:58.933 filename=/dev/nvme1n1 00:12:58.933 [job3] 00:12:58.933 filename=/dev/nvme2n1 00:12:58.933 [job4] 00:12:58.933 filename=/dev/nvme3n1 00:12:58.933 [job5] 00:12:58.933 filename=/dev/nvme4n1 00:12:58.933 [job6] 00:12:58.933 filename=/dev/nvme5n1 00:12:58.933 [job7] 00:12:58.933 filename=/dev/nvme6n1 00:12:58.933 [job8] 00:12:58.933 filename=/dev/nvme7n1 00:12:58.933 [job9] 00:12:58.933 filename=/dev/nvme8n1 00:12:58.933 [job10] 00:12:58.933 filename=/dev/nvme9n1 00:12:58.933 Could not set queue depth (nvme0n1) 00:12:58.933 Could not set queue depth (nvme10n1) 00:12:58.933 Could not set queue depth (nvme1n1) 00:12:58.933 Could not set queue depth (nvme2n1) 00:12:58.933 Could not set queue depth (nvme3n1) 00:12:58.933 Could not set queue depth (nvme4n1) 00:12:58.933 Could not set queue depth (nvme5n1) 00:12:58.933 Could not set queue depth (nvme6n1) 00:12:58.933 Could not set queue depth (nvme7n1) 00:12:58.933 Could not set queue depth (nvme8n1) 00:12:58.933 Could not set queue depth (nvme9n1) 00:12:58.933 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:58.933 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:58.933 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:58.933 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:58.933 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:58.933 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:58.933 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:58.933 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:58.933 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:58.933 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:58.933 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:58.933 fio-3.35 00:12:58.933 Starting 11 threads 00:13:08.916 00:13:08.916 job0: (groupid=0, jobs=1): err= 0: pid=78636: Thu Nov 28 21:20:31 2024 00:13:08.916 write: IOPS=254, BW=63.6MiB/s (66.7MB/s)(650MiB/10223msec); 0 zone resets 00:13:08.916 slat (usec): min=19, max=103867, avg=3737.34, stdev=7017.68 00:13:08.916 clat (msec): min=106, max=470, avg=247.77, stdev=30.27 00:13:08.916 lat (msec): min=106, max=471, avg=251.51, stdev=30.15 00:13:08.916 clat percentiles (msec): 00:13:08.916 | 1.00th=[ 133], 5.00th=[ 213], 10.00th=[ 230], 20.00th=[ 239], 00:13:08.916 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:13:08.916 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 262], 95.00th=[ 275], 00:13:08.916 | 99.00th=[ 359], 99.50th=[ 422], 99.90th=[ 456], 
99.95th=[ 472], 00:13:08.916 | 99.99th=[ 472] 00:13:08.916 bw ( KiB/s): min=49152, max=81920, per=4.76%, avg=64915.05, stdev=5719.09, samples=20 00:13:08.916 iops : min= 192, max= 320, avg=253.55, stdev=22.34, samples=20 00:13:08.916 lat (msec) : 250=42.69%, 500=57.31% 00:13:08.916 cpu : usr=0.43%, sys=0.65%, ctx=3246, majf=0, minf=1 00:13:08.916 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:13:08.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:08.916 issued rwts: total=0,2600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.916 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:08.916 job1: (groupid=0, jobs=1): err= 0: pid=78637: Thu Nov 28 21:20:31 2024 00:13:08.916 write: IOPS=248, BW=62.1MiB/s (65.2MB/s)(635MiB/10219msec); 0 zone resets 00:13:08.916 slat (usec): min=19, max=69400, avg=3931.43, stdev=7149.98 00:13:08.916 clat (msec): min=40, max=474, avg=253.43, stdev=31.35 00:13:08.916 lat (msec): min=40, max=474, avg=257.36, stdev=30.97 00:13:08.916 clat percentiles (msec): 00:13:08.916 | 1.00th=[ 106], 5.00th=[ 232], 10.00th=[ 236], 20.00th=[ 243], 00:13:08.916 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 257], 00:13:08.916 | 70.00th=[ 262], 80.00th=[ 266], 90.00th=[ 271], 95.00th=[ 279], 00:13:08.916 | 99.00th=[ 376], 99.50th=[ 426], 99.90th=[ 460], 99.95th=[ 477], 00:13:08.916 | 99.99th=[ 477] 00:13:08.916 bw ( KiB/s): min=59392, max=65536, per=4.65%, avg=63411.20, stdev=2129.09, samples=20 00:13:08.916 iops : min= 232, max= 256, avg=247.70, stdev= 8.32, samples=20 00:13:08.916 lat (msec) : 50=0.31%, 100=0.63%, 250=34.25%, 500=64.80% 00:13:08.916 cpu : usr=0.42%, sys=0.86%, ctx=1608, majf=0, minf=1 00:13:08.916 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:13:08.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:08.916 issued rwts: total=0,2540,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.916 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:08.916 job2: (groupid=0, jobs=1): err= 0: pid=78649: Thu Nov 28 21:20:31 2024 00:13:08.916 write: IOPS=698, BW=175MiB/s (183MB/s)(1759MiB/10077msec); 0 zone resets 00:13:08.916 slat (usec): min=16, max=15461, avg=1405.27, stdev=2391.34 00:13:08.916 clat (msec): min=17, max=163, avg=90.22, stdev= 6.66 00:13:08.916 lat (msec): min=17, max=163, avg=91.62, stdev= 6.31 00:13:08.916 clat percentiles (msec): 00:13:08.916 | 1.00th=[ 82], 5.00th=[ 86], 10.00th=[ 86], 20.00th=[ 87], 00:13:08.916 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 91], 60.00th=[ 92], 00:13:08.916 | 70.00th=[ 92], 80.00th=[ 93], 90.00th=[ 93], 95.00th=[ 94], 00:13:08.916 | 99.00th=[ 106], 99.50th=[ 120], 99.90th=[ 153], 99.95th=[ 159], 00:13:08.916 | 99.99th=[ 165] 00:13:08.916 bw ( KiB/s): min=172544, max=182784, per=13.10%, avg=178534.40, stdev=2308.34, samples=20 00:13:08.916 iops : min= 674, max= 714, avg=697.40, stdev= 9.02, samples=20 00:13:08.916 lat (msec) : 20=0.06%, 50=0.45%, 100=98.27%, 250=1.22% 00:13:08.916 cpu : usr=0.93%, sys=1.65%, ctx=10872, majf=0, minf=1 00:13:08.916 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:08.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:08.916 issued rwts: 
total=0,7037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.916 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:08.916 job3: (groupid=0, jobs=1): err= 0: pid=78650: Thu Nov 28 21:20:31 2024 00:13:08.916 write: IOPS=1121, BW=280MiB/s (294MB/s)(2819MiB/10051msec); 0 zone resets 00:13:08.916 slat (usec): min=17, max=6465, avg=868.49, stdev=1472.32 00:13:08.916 clat (msec): min=2, max=108, avg=56.17, stdev= 5.61 00:13:08.916 lat (msec): min=2, max=108, avg=57.04, stdev= 5.56 00:13:08.916 clat percentiles (msec): 00:13:08.916 | 1.00th=[ 33], 5.00th=[ 54], 10.00th=[ 54], 20.00th=[ 55], 00:13:08.916 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 57], 00:13:08.917 | 70.00th=[ 58], 80.00th=[ 58], 90.00th=[ 59], 95.00th=[ 60], 00:13:08.917 | 99.00th=[ 70], 99.50th=[ 88], 99.90th=[ 102], 99.95th=[ 103], 00:13:08.917 | 99.99th=[ 106] 00:13:08.917 bw ( KiB/s): min=281600, max=299607, per=21.06%, avg=287057.15, stdev=3981.30, samples=20 00:13:08.917 iops : min= 1100, max= 1170, avg=1121.30, stdev=15.50, samples=20 00:13:08.917 lat (msec) : 4=0.08%, 10=0.13%, 20=0.29%, 50=1.07%, 100=98.30% 00:13:08.917 lat (msec) : 250=0.12% 00:13:08.917 cpu : usr=1.61%, sys=2.56%, ctx=13241, majf=0, minf=1 00:13:08.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:08.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:08.917 issued rwts: total=0,11275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.917 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:08.917 job4: (groupid=0, jobs=1): err= 0: pid=78651: Thu Nov 28 21:20:31 2024 00:13:08.917 write: IOPS=255, BW=63.9MiB/s (67.0MB/s)(653MiB/10221msec); 0 zone resets 00:13:08.917 slat (usec): min=21, max=46531, avg=3785.46, stdev=6836.02 00:13:08.917 clat (msec): min=10, max=473, avg=246.42, stdev=38.08 00:13:08.917 lat (msec): min=10, max=473, avg=250.20, stdev=38.11 00:13:08.917 clat percentiles (msec): 00:13:08.917 | 1.00th=[ 50], 5.00th=[ 220], 10.00th=[ 234], 20.00th=[ 241], 00:13:08.917 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:13:08.917 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 262], 95.00th=[ 268], 00:13:08.917 | 99.00th=[ 359], 99.50th=[ 426], 99.90th=[ 460], 99.95th=[ 472], 00:13:08.917 | 99.99th=[ 472] 00:13:08.917 bw ( KiB/s): min=61952, max=79872, per=4.79%, avg=65253.95, stdev=3840.39, samples=20 00:13:08.917 iops : min= 242, max= 312, avg=254.85, stdev=15.01, samples=20 00:13:08.917 lat (msec) : 20=0.19%, 50=0.84%, 100=1.30%, 250=39.49%, 500=58.17% 00:13:08.917 cpu : usr=0.51%, sys=0.78%, ctx=2983, majf=0, minf=1 00:13:08.917 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:13:08.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:08.917 issued rwts: total=0,2613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.917 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:08.917 job5: (groupid=0, jobs=1): err= 0: pid=78652: Thu Nov 28 21:20:31 2024 00:13:08.917 write: IOPS=1116, BW=279MiB/s (293MB/s)(2808MiB/10055msec); 0 zone resets 00:13:08.917 slat (usec): min=17, max=19048, avg=885.62, stdev=1495.08 00:13:08.917 clat (msec): min=8, max=109, avg=56.40, stdev= 4.12 00:13:08.917 lat (msec): min=8, max=109, avg=57.29, stdev= 4.06 00:13:08.917 clat percentiles (msec): 00:13:08.917 | 1.00th=[ 52], 5.00th=[ 53], 
10.00th=[ 54], 20.00th=[ 55], 00:13:08.917 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 57], 00:13:08.917 | 70.00th=[ 58], 80.00th=[ 59], 90.00th=[ 59], 95.00th=[ 60], 00:13:08.917 | 99.00th=[ 71], 99.50th=[ 84], 99.90th=[ 100], 99.95th=[ 104], 00:13:08.917 | 99.99th=[ 107] 00:13:08.917 bw ( KiB/s): min=267776, max=296448, per=20.97%, avg=285875.20, stdev=6515.16, samples=20 00:13:08.917 iops : min= 1046, max= 1158, avg=1116.70, stdev=25.45, samples=20 00:13:08.917 lat (msec) : 10=0.03%, 20=0.07%, 50=0.35%, 100=99.47%, 250=0.08% 00:13:08.917 cpu : usr=1.78%, sys=2.74%, ctx=13590, majf=0, minf=1 00:13:08.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:08.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:08.917 issued rwts: total=0,11230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.917 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:08.917 job6: (groupid=0, jobs=1): err= 0: pid=78653: Thu Nov 28 21:20:31 2024 00:13:08.917 write: IOPS=253, BW=63.5MiB/s (66.6MB/s)(649MiB/10222msec); 0 zone resets 00:13:08.917 slat (usec): min=18, max=58994, avg=3824.46, stdev=6969.00 00:13:08.917 clat (msec): min=30, max=470, avg=248.04, stdev=37.91 00:13:08.917 lat (msec): min=30, max=470, avg=251.87, stdev=37.88 00:13:08.917 clat percentiles (msec): 00:13:08.917 | 1.00th=[ 59], 5.00th=[ 222], 10.00th=[ 236], 20.00th=[ 241], 00:13:08.917 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 255], 00:13:08.917 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 264], 95.00th=[ 275], 00:13:08.917 | 99.00th=[ 376], 99.50th=[ 422], 99.90th=[ 456], 99.95th=[ 472], 00:13:08.917 | 99.99th=[ 472] 00:13:08.917 bw ( KiB/s): min=59392, max=77466, per=4.76%, avg=64820.35, stdev=3579.99, samples=20 00:13:08.917 iops : min= 232, max= 302, avg=253.15, stdev=13.87, samples=20 00:13:08.917 lat (msec) : 50=0.73%, 100=1.85%, 250=33.17%, 500=64.25% 00:13:08.917 cpu : usr=0.44%, sys=0.68%, ctx=3202, majf=0, minf=1 00:13:08.917 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:13:08.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:08.917 issued rwts: total=0,2596,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.917 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:08.917 job7: (groupid=0, jobs=1): err= 0: pid=78654: Thu Nov 28 21:20:31 2024 00:13:08.917 write: IOPS=245, BW=61.4MiB/s (64.4MB/s)(628MiB/10220msec); 0 zone resets 00:13:08.917 slat (usec): min=17, max=65382, avg=3976.69, stdev=7294.40 00:13:08.917 clat (msec): min=67, max=471, avg=256.28, stdev=28.32 00:13:08.917 lat (msec): min=67, max=471, avg=260.26, stdev=27.76 00:13:08.917 clat percentiles (msec): 00:13:08.917 | 1.00th=[ 150], 5.00th=[ 234], 10.00th=[ 239], 20.00th=[ 243], 00:13:08.917 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 257], 60.00th=[ 257], 00:13:08.917 | 70.00th=[ 259], 80.00th=[ 268], 90.00th=[ 279], 95.00th=[ 284], 00:13:08.917 | 99.00th=[ 376], 99.50th=[ 422], 99.90th=[ 456], 99.95th=[ 472], 00:13:08.917 | 99.99th=[ 472] 00:13:08.917 bw ( KiB/s): min=55296, max=65536, per=4.60%, avg=62688.05, stdev=2915.87, samples=20 00:13:08.917 iops : min= 216, max= 256, avg=244.85, stdev=11.38, samples=20 00:13:08.917 lat (msec) : 100=0.48%, 250=28.58%, 500=70.94% 00:13:08.917 cpu : usr=0.58%, sys=0.68%, ctx=1664, majf=0, minf=1 
00:13:08.917 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:13:08.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:08.917 issued rwts: total=0,2512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.917 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:08.917 job8: (groupid=0, jobs=1): err= 0: pid=78655: Thu Nov 28 21:20:31 2024 00:13:08.917 write: IOPS=247, BW=61.9MiB/s (64.9MB/s)(633MiB/10227msec); 0 zone resets 00:13:08.917 slat (usec): min=18, max=69340, avg=3947.55, stdev=7222.16 00:13:08.917 clat (msec): min=25, max=473, avg=254.41, stdev=33.36 00:13:08.917 lat (msec): min=25, max=474, avg=258.36, stdev=33.06 00:13:08.917 clat percentiles (msec): 00:13:08.917 | 1.00th=[ 83], 5.00th=[ 232], 10.00th=[ 239], 20.00th=[ 245], 00:13:08.917 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 257], 60.00th=[ 257], 00:13:08.917 | 70.00th=[ 262], 80.00th=[ 266], 90.00th=[ 271], 95.00th=[ 279], 00:13:08.917 | 99.00th=[ 376], 99.50th=[ 426], 99.90th=[ 460], 99.95th=[ 472], 00:13:08.917 | 99.99th=[ 477] 00:13:08.917 bw ( KiB/s): min=57344, max=67584, per=4.64%, avg=63187.20, stdev=2128.47, samples=20 00:13:08.917 iops : min= 224, max= 264, avg=246.80, stdev= 8.29, samples=20 00:13:08.917 lat (msec) : 50=0.47%, 100=0.79%, 250=24.72%, 500=74.01% 00:13:08.917 cpu : usr=0.42%, sys=0.81%, ctx=3396, majf=0, minf=1 00:13:08.917 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:13:08.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:08.917 issued rwts: total=0,2532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.917 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:08.917 job9: (groupid=0, jobs=1): err= 0: pid=78656: Thu Nov 28 21:20:31 2024 00:13:08.917 write: IOPS=695, BW=174MiB/s (182MB/s)(1754MiB/10091msec); 0 zone resets 00:13:08.917 slat (usec): min=16, max=20357, avg=1383.66, stdev=2381.97 00:13:08.917 clat (msec): min=11, max=285, avg=90.63, stdev= 9.97 00:13:08.917 lat (msec): min=11, max=285, avg=92.01, stdev= 9.76 00:13:08.917 clat percentiles (msec): 00:13:08.917 | 1.00th=[ 74], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 87], 00:13:08.917 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 91], 60.00th=[ 92], 00:13:08.917 | 70.00th=[ 92], 80.00th=[ 93], 90.00th=[ 94], 95.00th=[ 95], 00:13:08.917 | 99.00th=[ 128], 99.50th=[ 150], 99.90th=[ 262], 99.95th=[ 275], 00:13:08.917 | 99.99th=[ 288] 00:13:08.917 bw ( KiB/s): min=161792, max=182272, per=13.06%, avg=178022.40, stdev=4202.74, samples=20 00:13:08.917 iops : min= 632, max= 712, avg=695.40, stdev=16.42, samples=20 00:13:08.917 lat (msec) : 20=0.11%, 50=0.16%, 100=98.32%, 250=1.30%, 500=0.11% 00:13:08.917 cpu : usr=1.24%, sys=1.95%, ctx=8362, majf=0, minf=1 00:13:08.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:08.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:08.917 issued rwts: total=0,7017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.917 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:08.917 job10: (groupid=0, jobs=1): err= 0: pid=78657: Thu Nov 28 21:20:31 2024 00:13:08.917 write: IOPS=248, BW=62.1MiB/s (65.1MB/s)(635MiB/10234msec); 0 zone resets 00:13:08.917 slat (usec): min=17, 
max=55049, avg=3936.56, stdev=7080.20 00:13:08.917 clat (msec): min=25, max=474, avg=253.69, stdev=33.57 00:13:08.917 lat (msec): min=25, max=474, avg=257.63, stdev=33.31 00:13:08.917 clat percentiles (msec): 00:13:08.917 | 1.00th=[ 83], 5.00th=[ 230], 10.00th=[ 236], 20.00th=[ 243], 00:13:08.917 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 257], 00:13:08.917 | 70.00th=[ 262], 80.00th=[ 268], 90.00th=[ 271], 95.00th=[ 275], 00:13:08.917 | 99.00th=[ 376], 99.50th=[ 426], 99.90th=[ 460], 99.95th=[ 477], 00:13:08.917 | 99.99th=[ 477] 00:13:08.917 bw ( KiB/s): min=59392, max=67584, per=4.65%, avg=63398.55, stdev=2531.91, samples=20 00:13:08.917 iops : min= 232, max= 264, avg=247.65, stdev= 9.89, samples=20 00:13:08.917 lat (msec) : 50=0.47%, 100=0.79%, 250=29.91%, 500=68.83% 00:13:08.917 cpu : usr=0.44%, sys=0.64%, ctx=3156, majf=0, minf=1 00:13:08.917 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:13:08.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:08.917 issued rwts: total=0,2541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.917 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:08.918 00:13:08.918 Run status group 0 (all jobs): 00:13:08.918 WRITE: bw=1331MiB/s (1396MB/s), 61.4MiB/s-280MiB/s (64.4MB/s-294MB/s), io=13.3GiB (14.3GB), run=10051-10234msec 00:13:08.918 00:13:08.918 Disk stats (read/write): 00:13:08.918 nvme0n1: ios=49/5171, merge=0/0, ticks=31/1233038, in_queue=1233069, util=97.39% 00:13:08.918 nvme10n1: ios=49/5063, merge=0/0, ticks=56/1232772, in_queue=1232828, util=97.83% 00:13:08.918 nvme1n1: ios=29/13835, merge=0/0, ticks=24/1208200, in_queue=1208224, util=97.59% 00:13:08.918 nvme2n1: ios=0/22299, merge=0/0, ticks=0/1212295, in_queue=1212295, util=97.86% 00:13:08.918 nvme3n1: ios=0/5201, merge=0/0, ticks=0/1232179, in_queue=1232179, util=97.88% 00:13:08.918 nvme4n1: ios=0/22203, merge=0/0, ticks=0/1210698, in_queue=1210698, util=98.21% 00:13:08.918 nvme5n1: ios=0/5163, merge=0/0, ticks=0/1231344, in_queue=1231344, util=98.20% 00:13:08.918 nvme6n1: ios=0/4997, merge=0/0, ticks=0/1231003, in_queue=1231003, util=98.31% 00:13:08.918 nvme7n1: ios=0/5045, merge=0/0, ticks=0/1232878, in_queue=1232878, util=98.75% 00:13:08.918 nvme8n1: ios=0/13854, merge=0/0, ticks=0/1213543, in_queue=1213543, util=98.99% 00:13:08.918 nvme9n1: ios=0/5059, merge=0/0, ticks=0/1233277, in_queue=1233277, util=98.96% 00:13:08.918 21:20:31 -- target/multiconnection.sh@36 -- # sync 00:13:08.918 21:20:31 -- target/multiconnection.sh@37 -- # seq 1 11 00:13:08.918 21:20:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:08.918 21:20:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.918 21:20:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:13:08.918 21:20:31 -- common/autotest_common.sh@1208 -- # local i=0 00:13:08.918 21:20:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:13:08.918 21:20:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:08.918 21:20:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:08.918 21:20:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:13:08.918 21:20:31 -- common/autotest_common.sh@1220 -- # return 0 00:13:08.918 21:20:31 -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.918 21:20:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.918 21:20:31 -- common/autotest_common.sh@10 -- # set +x 00:13:08.918 21:20:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.918 21:20:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:08.918 21:20:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:13:08.918 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:13:08.918 21:20:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:13:08.918 21:20:31 -- common/autotest_common.sh@1208 -- # local i=0 00:13:08.918 21:20:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:08.918 21:20:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:13:08.918 21:20:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:08.918 21:20:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:13:08.918 21:20:31 -- common/autotest_common.sh@1220 -- # return 0 00:13:08.918 21:20:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:08.918 21:20:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.918 21:20:31 -- common/autotest_common.sh@10 -- # set +x 00:13:08.918 21:20:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.918 21:20:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:08.918 21:20:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:13:08.918 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:13:08.918 21:20:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:13:08.918 21:20:31 -- common/autotest_common.sh@1208 -- # local i=0 00:13:08.918 21:20:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:08.918 21:20:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:13:08.918 21:20:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:08.918 21:20:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:13:08.918 21:20:31 -- common/autotest_common.sh@1220 -- # return 0 00:13:08.918 21:20:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:08.918 21:20:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.918 21:20:31 -- common/autotest_common.sh@10 -- # set +x 00:13:08.918 21:20:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.918 21:20:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:08.918 21:20:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:13:08.918 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:13:08.918 21:20:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:13:08.918 21:20:31 -- common/autotest_common.sh@1208 -- # local i=0 00:13:08.918 21:20:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:08.918 21:20:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:13:08.918 21:20:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:08.918 21:20:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:13:08.918 21:20:31 -- common/autotest_common.sh@1220 -- # return 0 00:13:08.918 21:20:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:08.918 21:20:31 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.918 21:20:31 -- common/autotest_common.sh@10 -- # set +x 00:13:08.918 21:20:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.918 21:20:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:08.918 21:20:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:13:08.918 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:13:08.918 21:20:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:13:08.918 21:20:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:08.918 21:20:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:08.918 21:20:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:13:08.918 21:20:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:08.918 21:20:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:13:08.918 21:20:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:08.918 21:20:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:13:08.918 21:20:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.918 21:20:32 -- common/autotest_common.sh@10 -- # set +x 00:13:08.918 21:20:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.918 21:20:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:08.918 21:20:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:13:08.918 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:13:08.918 21:20:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:13:08.918 21:20:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:08.918 21:20:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:08.918 21:20:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:13:08.918 21:20:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:08.918 21:20:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:13:08.918 21:20:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:08.918 21:20:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:13:08.918 21:20:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.918 21:20:32 -- common/autotest_common.sh@10 -- # set +x 00:13:08.918 21:20:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.918 21:20:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:08.918 21:20:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:13:08.918 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:13:08.918 21:20:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:13:08.918 21:20:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:08.918 21:20:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:08.918 21:20:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:13:08.918 21:20:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:08.918 21:20:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:13:08.918 21:20:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:08.918 21:20:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:13:08.918 21:20:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.918 21:20:32 -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.918 21:20:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.918 21:20:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:08.918 21:20:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:13:08.918 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:13:08.918 21:20:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:13:08.918 21:20:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:08.918 21:20:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:08.918 21:20:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:13:08.918 21:20:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:13:08.918 21:20:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:08.918 21:20:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:08.918 21:20:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:13:08.918 21:20:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.918 21:20:32 -- common/autotest_common.sh@10 -- # set +x 00:13:08.918 21:20:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.918 21:20:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:08.918 21:20:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:13:08.918 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:13:08.918 21:20:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:13:08.918 21:20:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:08.918 21:20:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:08.918 21:20:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:13:08.919 21:20:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:08.919 21:20:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:13:08.919 21:20:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:08.919 21:20:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:13:08.919 21:20:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.919 21:20:32 -- common/autotest_common.sh@10 -- # set +x 00:13:08.919 21:20:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.919 21:20:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:08.919 21:20:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:13:08.919 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:13:08.919 21:20:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:13:08.919 21:20:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:08.919 21:20:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:08.919 21:20:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:13:08.919 21:20:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:08.919 21:20:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:13:08.919 21:20:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:08.919 21:20:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:13:08.919 21:20:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.919 21:20:32 -- common/autotest_common.sh@10 -- # set +x 00:13:08.919 21:20:32 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.919 21:20:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:08.919 21:20:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:13:08.919 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:13:08.919 21:20:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:13:08.919 21:20:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:08.919 21:20:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:08.919 21:20:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:13:09.178 21:20:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:09.178 21:20:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:13:09.178 21:20:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:09.178 21:20:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:13:09.178 21:20:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.178 21:20:32 -- common/autotest_common.sh@10 -- # set +x 00:13:09.178 21:20:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.178 21:20:32 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:13:09.178 21:20:32 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:13:09.178 21:20:32 -- target/multiconnection.sh@47 -- # nvmftestfini 00:13:09.178 21:20:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:09.178 21:20:32 -- nvmf/common.sh@116 -- # sync 00:13:09.178 21:20:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:09.178 21:20:32 -- nvmf/common.sh@119 -- # set +e 00:13:09.178 21:20:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:09.178 21:20:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:09.178 rmmod nvme_tcp 00:13:09.178 rmmod nvme_fabrics 00:13:09.178 rmmod nvme_keyring 00:13:09.178 21:20:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:09.178 21:20:32 -- nvmf/common.sh@123 -- # set -e 00:13:09.178 21:20:32 -- nvmf/common.sh@124 -- # return 0 00:13:09.178 21:20:32 -- nvmf/common.sh@477 -- # '[' -n 77974 ']' 00:13:09.178 21:20:32 -- nvmf/common.sh@478 -- # killprocess 77974 00:13:09.178 21:20:32 -- common/autotest_common.sh@936 -- # '[' -z 77974 ']' 00:13:09.178 21:20:32 -- common/autotest_common.sh@940 -- # kill -0 77974 00:13:09.178 21:20:32 -- common/autotest_common.sh@941 -- # uname 00:13:09.178 21:20:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:09.178 21:20:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77974 00:13:09.178 killing process with pid 77974 00:13:09.178 21:20:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:09.178 21:20:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:09.178 21:20:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77974' 00:13:09.178 21:20:32 -- common/autotest_common.sh@955 -- # kill 77974 00:13:09.178 21:20:32 -- common/autotest_common.sh@960 -- # wait 77974 00:13:09.437 21:20:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:09.437 21:20:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:09.437 21:20:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:09.437 21:20:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:09.437 21:20:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:09.437 21:20:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.437 
21:20:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.437 21:20:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.437 21:20:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:09.437 ************************************ 00:13:09.437 END TEST nvmf_multiconnection 00:13:09.437 ************************************ 00:13:09.437 00:13:09.437 real 0m48.466s 00:13:09.437 user 2m36.083s 00:13:09.437 sys 0m36.697s 00:13:09.437 21:20:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:09.437 21:20:33 -- common/autotest_common.sh@10 -- # set +x 00:13:09.438 21:20:33 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:09.438 21:20:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:09.438 21:20:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:09.438 21:20:33 -- common/autotest_common.sh@10 -- # set +x 00:13:09.438 ************************************ 00:13:09.438 START TEST nvmf_initiator_timeout 00:13:09.438 ************************************ 00:13:09.438 21:20:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:09.697 * Looking for test storage... 00:13:09.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:09.697 21:20:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:09.697 21:20:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:09.697 21:20:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:09.697 21:20:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:09.697 21:20:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:09.697 21:20:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:09.697 21:20:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:09.697 21:20:33 -- scripts/common.sh@335 -- # IFS=.-: 00:13:09.697 21:20:33 -- scripts/common.sh@335 -- # read -ra ver1 00:13:09.697 21:20:33 -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.697 21:20:33 -- scripts/common.sh@336 -- # read -ra ver2 00:13:09.697 21:20:33 -- scripts/common.sh@337 -- # local 'op=<' 00:13:09.697 21:20:33 -- scripts/common.sh@339 -- # ver1_l=2 00:13:09.697 21:20:33 -- scripts/common.sh@340 -- # ver2_l=1 00:13:09.697 21:20:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:09.697 21:20:33 -- scripts/common.sh@343 -- # case "$op" in 00:13:09.697 21:20:33 -- scripts/common.sh@344 -- # : 1 00:13:09.697 21:20:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:09.697 21:20:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:09.697 21:20:33 -- scripts/common.sh@364 -- # decimal 1 00:13:09.697 21:20:33 -- scripts/common.sh@352 -- # local d=1 00:13:09.697 21:20:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.697 21:20:33 -- scripts/common.sh@354 -- # echo 1 00:13:09.697 21:20:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:09.697 21:20:33 -- scripts/common.sh@365 -- # decimal 2 00:13:09.697 21:20:33 -- scripts/common.sh@352 -- # local d=2 00:13:09.697 21:20:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.697 21:20:33 -- scripts/common.sh@354 -- # echo 2 00:13:09.697 21:20:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:09.697 21:20:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:09.697 21:20:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:09.697 21:20:33 -- scripts/common.sh@367 -- # return 0 00:13:09.697 21:20:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.697 21:20:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:09.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.698 --rc genhtml_branch_coverage=1 00:13:09.698 --rc genhtml_function_coverage=1 00:13:09.698 --rc genhtml_legend=1 00:13:09.698 --rc geninfo_all_blocks=1 00:13:09.698 --rc geninfo_unexecuted_blocks=1 00:13:09.698 00:13:09.698 ' 00:13:09.698 21:20:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:09.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.698 --rc genhtml_branch_coverage=1 00:13:09.698 --rc genhtml_function_coverage=1 00:13:09.698 --rc genhtml_legend=1 00:13:09.698 --rc geninfo_all_blocks=1 00:13:09.698 --rc geninfo_unexecuted_blocks=1 00:13:09.698 00:13:09.698 ' 00:13:09.698 21:20:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:09.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.698 --rc genhtml_branch_coverage=1 00:13:09.698 --rc genhtml_function_coverage=1 00:13:09.698 --rc genhtml_legend=1 00:13:09.698 --rc geninfo_all_blocks=1 00:13:09.698 --rc geninfo_unexecuted_blocks=1 00:13:09.698 00:13:09.698 ' 00:13:09.698 21:20:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:09.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.698 --rc genhtml_branch_coverage=1 00:13:09.698 --rc genhtml_function_coverage=1 00:13:09.698 --rc genhtml_legend=1 00:13:09.698 --rc geninfo_all_blocks=1 00:13:09.698 --rc geninfo_unexecuted_blocks=1 00:13:09.698 00:13:09.698 ' 00:13:09.698 21:20:33 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:09.698 21:20:33 -- nvmf/common.sh@7 -- # uname -s 00:13:09.698 21:20:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.698 21:20:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.698 21:20:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.698 21:20:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.698 21:20:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.698 21:20:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.698 21:20:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.698 21:20:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.698 21:20:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.698 21:20:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.698 21:20:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 
00:13:09.698 21:20:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:13:09.698 21:20:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.698 21:20:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.698 21:20:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:09.698 21:20:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:09.698 21:20:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.698 21:20:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.698 21:20:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.698 21:20:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.698 21:20:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.698 21:20:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.698 21:20:33 -- paths/export.sh@5 -- # export PATH 00:13:09.698 21:20:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.698 21:20:33 -- nvmf/common.sh@46 -- # : 0 00:13:09.698 21:20:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:09.698 21:20:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:09.698 21:20:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:09.698 21:20:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.698 21:20:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.698 21:20:33 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:09.698 21:20:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:09.698 21:20:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:09.698 21:20:33 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:09.698 21:20:33 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:09.698 21:20:33 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:13:09.698 21:20:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:09.698 21:20:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.698 21:20:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:09.698 21:20:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:09.698 21:20:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:09.698 21:20:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.698 21:20:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.698 21:20:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.698 21:20:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:09.698 21:20:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:09.698 21:20:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:09.698 21:20:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:09.698 21:20:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:09.698 21:20:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:09.698 21:20:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.698 21:20:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.698 21:20:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:09.698 21:20:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:09.698 21:20:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:09.698 21:20:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:09.698 21:20:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:09.698 21:20:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.698 21:20:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:09.698 21:20:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:09.698 21:20:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:09.698 21:20:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:09.698 21:20:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:09.698 21:20:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:09.698 Cannot find device "nvmf_tgt_br" 00:13:09.698 21:20:33 -- nvmf/common.sh@154 -- # true 00:13:09.698 21:20:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:09.698 Cannot find device "nvmf_tgt_br2" 00:13:09.698 21:20:33 -- nvmf/common.sh@155 -- # true 00:13:09.698 21:20:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:09.958 21:20:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:09.958 Cannot find device "nvmf_tgt_br" 00:13:09.958 21:20:33 -- nvmf/common.sh@157 -- # true 00:13:09.958 21:20:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:09.958 Cannot find device "nvmf_tgt_br2" 00:13:09.958 21:20:33 -- nvmf/common.sh@158 -- # true 00:13:09.958 21:20:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:09.958 21:20:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:09.958 21:20:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:13:09.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:09.958 21:20:33 -- nvmf/common.sh@161 -- # true 00:13:09.958 21:20:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:09.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:09.958 21:20:33 -- nvmf/common.sh@162 -- # true 00:13:09.958 21:20:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:09.958 21:20:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:09.958 21:20:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:09.958 21:20:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:09.958 21:20:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:09.958 21:20:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:09.958 21:20:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:09.958 21:20:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:09.958 21:20:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:09.958 21:20:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:09.958 21:20:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:09.958 21:20:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:09.958 21:20:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:09.958 21:20:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:09.958 21:20:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:09.958 21:20:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:09.958 21:20:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:09.958 21:20:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:09.958 21:20:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:09.958 21:20:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:09.958 21:20:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:09.958 21:20:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:09.958 21:20:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:09.958 21:20:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:09.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:13:09.958 00:13:09.958 --- 10.0.0.2 ping statistics --- 00:13:09.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.958 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:09.958 21:20:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:09.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:09.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:13:09.958 00:13:09.958 --- 10.0.0.3 ping statistics --- 00:13:09.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.958 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:09.958 21:20:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:09.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:09.958 00:13:09.958 --- 10.0.0.1 ping statistics --- 00:13:09.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.958 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:09.958 21:20:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.958 21:20:33 -- nvmf/common.sh@421 -- # return 0 00:13:09.958 21:20:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:09.958 21:20:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.958 21:20:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:09.959 21:20:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:09.959 21:20:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.959 21:20:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:09.959 21:20:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:10.218 21:20:33 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:13:10.218 21:20:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:10.218 21:20:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:10.218 21:20:33 -- common/autotest_common.sh@10 -- # set +x 00:13:10.218 21:20:33 -- nvmf/common.sh@469 -- # nvmfpid=79039 00:13:10.218 21:20:33 -- nvmf/common.sh@470 -- # waitforlisten 79039 00:13:10.218 21:20:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:10.218 21:20:33 -- common/autotest_common.sh@829 -- # '[' -z 79039 ']' 00:13:10.218 21:20:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.218 21:20:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:10.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.218 21:20:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.218 21:20:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:10.218 21:20:33 -- common/autotest_common.sh@10 -- # set +x 00:13:10.218 [2024-11-28 21:20:33.771097] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:10.218 [2024-11-28 21:20:33.771263] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.218 [2024-11-28 21:20:33.910528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.218 [2024-11-28 21:20:33.942945] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:10.218 [2024-11-28 21:20:33.943122] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.218 [2024-11-28 21:20:33.943135] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.218 [2024-11-28 21:20:33.943142] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:10.218 [2024-11-28 21:20:33.943763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.218 [2024-11-28 21:20:33.943874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.218 [2024-11-28 21:20:33.943912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.218 [2024-11-28 21:20:33.943918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.205 21:20:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:11.205 21:20:34 -- common/autotest_common.sh@862 -- # return 0 00:13:11.205 21:20:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:11.205 21:20:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:11.205 21:20:34 -- common/autotest_common.sh@10 -- # set +x 00:13:11.205 21:20:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.205 21:20:34 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:11.205 21:20:34 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:11.205 21:20:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.205 21:20:34 -- common/autotest_common.sh@10 -- # set +x 00:13:11.205 Malloc0 00:13:11.205 21:20:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.205 21:20:34 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:13:11.205 21:20:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.205 21:20:34 -- common/autotest_common.sh@10 -- # set +x 00:13:11.205 Delay0 00:13:11.205 21:20:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.205 21:20:34 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:11.205 21:20:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.205 21:20:34 -- common/autotest_common.sh@10 -- # set +x 00:13:11.205 [2024-11-28 21:20:34.893644] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.205 21:20:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.205 21:20:34 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:11.205 21:20:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.205 21:20:34 -- common/autotest_common.sh@10 -- # set +x 00:13:11.482 21:20:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.482 21:20:34 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.482 21:20:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.482 21:20:34 -- common/autotest_common.sh@10 -- # set +x 00:13:11.482 21:20:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.482 21:20:34 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.482 21:20:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.482 21:20:34 -- common/autotest_common.sh@10 -- # set +x 00:13:11.482 [2024-11-28 21:20:34.921891] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.482 21:20:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.482 21:20:34 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.482 21:20:35 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.482 21:20:35 -- common/autotest_common.sh@1187 -- # local i=0 00:13:11.482 21:20:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.482 21:20:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:11.482 21:20:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:13.384 21:20:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:13.384 21:20:37 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.384 21:20:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:13.384 21:20:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:13.384 21:20:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.384 21:20:37 -- common/autotest_common.sh@1197 -- # return 0 00:13:13.384 21:20:37 -- target/initiator_timeout.sh@35 -- # fio_pid=79103 00:13:13.384 21:20:37 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:13:13.384 21:20:37 -- target/initiator_timeout.sh@37 -- # sleep 3 00:13:13.384 [global] 00:13:13.384 thread=1 00:13:13.384 invalidate=1 00:13:13.384 rw=write 00:13:13.384 time_based=1 00:13:13.384 runtime=60 00:13:13.384 ioengine=libaio 00:13:13.384 direct=1 00:13:13.384 bs=4096 00:13:13.384 iodepth=1 00:13:13.384 norandommap=0 00:13:13.384 numjobs=1 00:13:13.384 00:13:13.384 verify_dump=1 00:13:13.384 verify_backlog=512 00:13:13.384 verify_state_save=0 00:13:13.384 do_verify=1 00:13:13.384 verify=crc32c-intel 00:13:13.384 [job0] 00:13:13.384 filename=/dev/nvme0n1 00:13:13.384 Could not set queue depth (nvme0n1) 00:13:13.643 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:13.643 fio-3.35 00:13:13.643 Starting 1 thread 00:13:16.927 21:20:40 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:13:16.927 21:20:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.927 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:13:16.927 true 00:13:16.927 21:20:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.927 21:20:40 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:13:16.927 21:20:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.927 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:13:16.927 true 00:13:16.927 21:20:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.927 21:20:40 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:13:16.927 21:20:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.927 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:13:16.927 true 00:13:16.927 21:20:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.927 21:20:40 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:13:16.927 21:20:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.927 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:13:16.927 true 00:13:16.927 21:20:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.927 21:20:40 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:13:19.456 21:20:43 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:13:19.456 21:20:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.456 21:20:43 -- common/autotest_common.sh@10 -- # set +x 00:13:19.456 true 00:13:19.456 21:20:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.456 21:20:43 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:13:19.456 21:20:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.456 21:20:43 -- common/autotest_common.sh@10 -- # set +x 00:13:19.456 true 00:13:19.456 21:20:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.456 21:20:43 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:13:19.456 21:20:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.456 21:20:43 -- common/autotest_common.sh@10 -- # set +x 00:13:19.456 true 00:13:19.456 21:20:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.456 21:20:43 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:13:19.456 21:20:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.456 21:20:43 -- common/autotest_common.sh@10 -- # set +x 00:13:19.456 true 00:13:19.456 21:20:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.456 21:20:43 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:13:19.456 21:20:43 -- target/initiator_timeout.sh@54 -- # wait 79103 00:14:15.738 00:14:15.738 job0: (groupid=0, jobs=1): err= 0: pid=79124: Thu Nov 28 21:21:37 2024 00:14:15.738 read: IOPS=776, BW=3106KiB/s (3181kB/s)(182MiB/60000msec) 00:14:15.738 slat (usec): min=10, max=225, avg=14.28, stdev= 4.75 00:14:15.738 clat (usec): min=157, max=40567k, avg=1082.02, stdev=187939.54 00:14:15.738 lat (usec): min=169, max=40567k, avg=1096.30, stdev=187939.57 00:14:15.738 clat percentiles (usec): 00:14:15.738 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 190], 00:14:15.738 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 215], 00:14:15.738 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 253], 00:14:15.739 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 486], 99.95th=[ 545], 00:14:15.739 | 99.99th=[ 750] 00:14:15.739 write: IOPS=780, BW=3121KiB/s (3196kB/s)(183MiB/60000msec); 0 zone resets 00:14:15.739 slat (usec): min=13, max=11457, avg=22.70, stdev=63.91 00:14:15.739 clat (usec): min=88, max=3730, avg=164.69, stdev=38.94 00:14:15.739 lat (usec): min=134, max=11812, avg=187.40, stdev=75.81 00:14:15.739 clat percentiles (usec): 00:14:15.739 | 1.00th=[ 125], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 145], 00:14:15.739 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:14:15.739 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 194], 95.00th=[ 204], 00:14:15.739 | 99.00th=[ 227], 99.50th=[ 251], 99.90th=[ 515], 99.95th=[ 619], 00:14:15.739 | 99.99th=[ 1319] 00:14:15.739 bw ( KiB/s): min= 4096, max=11728, per=100.00%, avg=9386.95, stdev=1659.11, samples=39 00:14:15.739 iops : min= 1024, max= 2932, avg=2346.72, stdev=414.79, samples=39 00:14:15.739 lat (usec) : 100=0.01%, 250=96.69%, 500=3.22%, 750=0.07%, 1000=0.01% 00:14:15.739 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:14:15.739 cpu : usr=0.55%, sys=2.20%, ctx=93429, majf=0, minf=5 00:14:15.739 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:15.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.739 issued rwts: total=46592,46819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.739 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:15.739 00:14:15.739 Run status group 0 (all jobs): 00:14:15.739 READ: bw=3106KiB/s (3181kB/s), 3106KiB/s-3106KiB/s (3181kB/s-3181kB/s), io=182MiB (191MB), run=60000-60000msec 00:14:15.739 WRITE: bw=3121KiB/s (3196kB/s), 3121KiB/s-3121KiB/s (3196kB/s-3196kB/s), io=183MiB (192MB), run=60000-60000msec 00:14:15.739 00:14:15.739 Disk stats (read/write): 00:14:15.739 nvme0n1: ios=46561/46592, merge=0/0, ticks=10432/8504, in_queue=18936, util=99.64% 00:14:15.739 21:21:37 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.739 21:21:37 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:15.739 21:21:37 -- common/autotest_common.sh@1208 -- # local i=0 00:14:15.739 21:21:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.739 21:21:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:15.739 21:21:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:15.739 21:21:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.739 21:21:37 -- common/autotest_common.sh@1220 -- # return 0 00:14:15.739 21:21:37 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:14:15.739 21:21:37 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:14:15.739 nvmf hotplug test: fio successful as expected 00:14:15.739 21:21:37 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.739 21:21:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.739 21:21:37 -- common/autotest_common.sh@10 -- # set +x 00:14:15.739 21:21:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.739 21:21:37 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:14:15.739 21:21:37 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:14:15.739 21:21:37 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:14:15.739 21:21:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:15.739 21:21:37 -- nvmf/common.sh@116 -- # sync 00:14:15.739 21:21:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:15.739 21:21:37 -- nvmf/common.sh@119 -- # set +e 00:14:15.739 21:21:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:15.739 21:21:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:15.739 rmmod nvme_tcp 00:14:15.739 rmmod nvme_fabrics 00:14:15.739 rmmod nvme_keyring 00:14:15.739 21:21:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:15.739 21:21:37 -- nvmf/common.sh@123 -- # set -e 00:14:15.739 21:21:37 -- nvmf/common.sh@124 -- # return 0 00:14:15.739 21:21:37 -- nvmf/common.sh@477 -- # '[' -n 79039 ']' 00:14:15.739 21:21:37 -- nvmf/common.sh@478 -- # killprocess 79039 00:14:15.739 21:21:37 -- common/autotest_common.sh@936 -- # '[' -z 79039 ']' 00:14:15.739 21:21:37 -- common/autotest_common.sh@940 -- # kill -0 79039 00:14:15.739 21:21:37 -- common/autotest_common.sh@941 -- # uname 00:14:15.739 21:21:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:15.739 21:21:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79039 00:14:15.739 killing process with 
pid 79039 00:14:15.739 21:21:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:15.739 21:21:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:15.739 21:21:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79039' 00:14:15.739 21:21:37 -- common/autotest_common.sh@955 -- # kill 79039 00:14:15.739 21:21:37 -- common/autotest_common.sh@960 -- # wait 79039 00:14:15.739 21:21:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:15.739 21:21:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:15.739 21:21:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:15.739 21:21:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.739 21:21:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:15.739 21:21:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.739 21:21:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.739 21:21:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.739 21:21:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:15.739 ************************************ 00:14:15.739 END TEST nvmf_initiator_timeout 00:14:15.739 ************************************ 00:14:15.739 00:14:15.739 real 1m4.590s 00:14:15.739 user 3m53.403s 00:14:15.739 sys 0m21.983s 00:14:15.739 21:21:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:15.739 21:21:37 -- common/autotest_common.sh@10 -- # set +x 00:14:15.739 21:21:37 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:14:15.739 21:21:37 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:15.739 21:21:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:15.739 21:21:37 -- common/autotest_common.sh@10 -- # set +x 00:14:15.739 21:21:37 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:15.739 21:21:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:15.739 21:21:37 -- common/autotest_common.sh@10 -- # set +x 00:14:15.739 21:21:37 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:15.739 21:21:37 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:15.739 21:21:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:15.739 21:21:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:15.739 21:21:37 -- common/autotest_common.sh@10 -- # set +x 00:14:15.739 ************************************ 00:14:15.739 START TEST nvmf_identify 00:14:15.739 ************************************ 00:14:15.739 21:21:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:15.739 * Looking for test storage... 
00:14:15.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:15.739 21:21:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:15.739 21:21:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:15.739 21:21:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:15.739 21:21:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:15.739 21:21:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:15.739 21:21:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:15.739 21:21:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:15.739 21:21:38 -- scripts/common.sh@335 -- # IFS=.-: 00:14:15.739 21:21:38 -- scripts/common.sh@335 -- # read -ra ver1 00:14:15.739 21:21:38 -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.739 21:21:38 -- scripts/common.sh@336 -- # read -ra ver2 00:14:15.739 21:21:38 -- scripts/common.sh@337 -- # local 'op=<' 00:14:15.739 21:21:38 -- scripts/common.sh@339 -- # ver1_l=2 00:14:15.739 21:21:38 -- scripts/common.sh@340 -- # ver2_l=1 00:14:15.739 21:21:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:15.739 21:21:38 -- scripts/common.sh@343 -- # case "$op" in 00:14:15.739 21:21:38 -- scripts/common.sh@344 -- # : 1 00:14:15.739 21:21:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:15.739 21:21:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:15.739 21:21:38 -- scripts/common.sh@364 -- # decimal 1 00:14:15.739 21:21:38 -- scripts/common.sh@352 -- # local d=1 00:14:15.739 21:21:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.739 21:21:38 -- scripts/common.sh@354 -- # echo 1 00:14:15.739 21:21:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:15.739 21:21:38 -- scripts/common.sh@365 -- # decimal 2 00:14:15.739 21:21:38 -- scripts/common.sh@352 -- # local d=2 00:14:15.739 21:21:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.739 21:21:38 -- scripts/common.sh@354 -- # echo 2 00:14:15.739 21:21:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:15.739 21:21:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:15.739 21:21:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:15.739 21:21:38 -- scripts/common.sh@367 -- # return 0 00:14:15.739 21:21:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.739 21:21:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:15.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.739 --rc genhtml_branch_coverage=1 00:14:15.739 --rc genhtml_function_coverage=1 00:14:15.739 --rc genhtml_legend=1 00:14:15.739 --rc geninfo_all_blocks=1 00:14:15.739 --rc geninfo_unexecuted_blocks=1 00:14:15.739 00:14:15.739 ' 00:14:15.739 21:21:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:15.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.739 --rc genhtml_branch_coverage=1 00:14:15.739 --rc genhtml_function_coverage=1 00:14:15.739 --rc genhtml_legend=1 00:14:15.739 --rc geninfo_all_blocks=1 00:14:15.739 --rc geninfo_unexecuted_blocks=1 00:14:15.739 00:14:15.739 ' 00:14:15.739 21:21:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:15.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.739 --rc genhtml_branch_coverage=1 00:14:15.739 --rc genhtml_function_coverage=1 00:14:15.740 --rc genhtml_legend=1 00:14:15.740 --rc geninfo_all_blocks=1 00:14:15.740 --rc geninfo_unexecuted_blocks=1 00:14:15.740 00:14:15.740 ' 00:14:15.740 
21:21:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:15.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.740 --rc genhtml_branch_coverage=1 00:14:15.740 --rc genhtml_function_coverage=1 00:14:15.740 --rc genhtml_legend=1 00:14:15.740 --rc geninfo_all_blocks=1 00:14:15.740 --rc geninfo_unexecuted_blocks=1 00:14:15.740 00:14:15.740 ' 00:14:15.740 21:21:38 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:15.740 21:21:38 -- nvmf/common.sh@7 -- # uname -s 00:14:15.740 21:21:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.740 21:21:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.740 21:21:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.740 21:21:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.740 21:21:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.740 21:21:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.740 21:21:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.740 21:21:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.740 21:21:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.740 21:21:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.740 21:21:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:14:15.740 21:21:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:14:15.740 21:21:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.740 21:21:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.740 21:21:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:15.740 21:21:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:15.740 21:21:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.740 21:21:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.740 21:21:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.740 21:21:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.740 21:21:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.740 21:21:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.740 21:21:38 -- paths/export.sh@5 -- # export PATH 00:14:15.740 21:21:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.740 21:21:38 -- nvmf/common.sh@46 -- # : 0 00:14:15.740 21:21:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:15.740 21:21:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:15.740 21:21:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:15.740 21:21:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.740 21:21:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.740 21:21:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:15.740 21:21:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:15.740 21:21:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:15.740 21:21:38 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:15.740 21:21:38 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:15.740 21:21:38 -- host/identify.sh@14 -- # nvmftestinit 00:14:15.740 21:21:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:15.740 21:21:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.740 21:21:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:15.740 21:21:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:15.740 21:21:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:15.740 21:21:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.740 21:21:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.740 21:21:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.740 21:21:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:15.740 21:21:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:15.740 21:21:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:15.740 21:21:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:15.740 21:21:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:15.740 21:21:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:15.740 21:21:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.740 21:21:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.740 21:21:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:15.740 21:21:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:15.740 21:21:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:15.740 21:21:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:15.740 21:21:38 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:15.740 21:21:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.740 21:21:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:15.740 21:21:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:15.740 21:21:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:15.740 21:21:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:15.740 21:21:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:15.740 21:21:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:15.740 Cannot find device "nvmf_tgt_br" 00:14:15.740 21:21:38 -- nvmf/common.sh@154 -- # true 00:14:15.740 21:21:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:15.740 Cannot find device "nvmf_tgt_br2" 00:14:15.740 21:21:38 -- nvmf/common.sh@155 -- # true 00:14:15.740 21:21:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:15.740 21:21:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:15.740 Cannot find device "nvmf_tgt_br" 00:14:15.740 21:21:38 -- nvmf/common.sh@157 -- # true 00:14:15.740 21:21:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:15.740 Cannot find device "nvmf_tgt_br2" 00:14:15.740 21:21:38 -- nvmf/common.sh@158 -- # true 00:14:15.740 21:21:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:15.740 21:21:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:15.740 21:21:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:15.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:15.740 21:21:38 -- nvmf/common.sh@161 -- # true 00:14:15.740 21:21:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:15.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:15.740 21:21:38 -- nvmf/common.sh@162 -- # true 00:14:15.740 21:21:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:15.740 21:21:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:15.740 21:21:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:15.740 21:21:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:15.740 21:21:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:15.740 21:21:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:15.740 21:21:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:15.740 21:21:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:15.740 21:21:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:15.740 21:21:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:15.740 21:21:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:15.740 21:21:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:15.740 21:21:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:15.740 21:21:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:15.740 21:21:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:15.740 21:21:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:14:15.740 21:21:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:15.740 21:21:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:15.740 21:21:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:15.740 21:21:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:15.740 21:21:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:15.740 21:21:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:15.740 21:21:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:15.740 21:21:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:15.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:15.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:14:15.740 00:14:15.740 --- 10.0.0.2 ping statistics --- 00:14:15.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.740 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:15.740 21:21:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:15.740 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:15.740 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:14:15.740 00:14:15.740 --- 10.0.0.3 ping statistics --- 00:14:15.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.740 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:14:15.740 21:21:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:15.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:15.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:14:15.740 00:14:15.740 --- 10.0.0.1 ping statistics --- 00:14:15.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.741 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:15.741 21:21:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.741 21:21:38 -- nvmf/common.sh@421 -- # return 0 00:14:15.741 21:21:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:15.741 21:21:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.741 21:21:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:15.741 21:21:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:15.741 21:21:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.741 21:21:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:15.741 21:21:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:15.741 21:21:38 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:15.741 21:21:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:15.741 21:21:38 -- common/autotest_common.sh@10 -- # set +x 00:14:15.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
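The xtrace above shows nvmf_veth_init wiring up the test network: one veth pair for the initiator, one for the target (whose far end is moved into the nvmf_tgt_ns_spdk namespace), and a bridge joining the host-side peers. Condensed into a by-hand sketch (assuming root privileges and the same names and addresses used in the trace; the second target interface nvmf_tgt_if2/10.0.0.3 is set up the same way and omitted here for brevity):

  # the target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg, stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target leg, far end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers and allow NVMe/TCP traffic on port 4420
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # sanity check: the root namespace can reach the target address

The ping checks in the trace above confirm exactly this reachability (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside nvmf_tgt_ns_spdk) before the target is started.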
00:14:15.741 21:21:38 -- host/identify.sh@19 -- # nvmfpid=79971 00:14:15.741 21:21:38 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:15.741 21:21:38 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:15.741 21:21:38 -- host/identify.sh@23 -- # waitforlisten 79971 00:14:15.741 21:21:38 -- common/autotest_common.sh@829 -- # '[' -z 79971 ']' 00:14:15.741 21:21:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.741 21:21:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.741 21:21:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.741 21:21:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.741 21:21:38 -- common/autotest_common.sh@10 -- # set +x 00:14:15.741 [2024-11-28 21:21:38.465487] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:15.741 [2024-11-28 21:21:38.465806] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.741 [2024-11-28 21:21:38.608863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:15.741 [2024-11-28 21:21:38.640687] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:15.741 [2024-11-28 21:21:38.641097] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.741 [2024-11-28 21:21:38.641149] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.741 [2024-11-28 21:21:38.641300] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
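identify.sh then starts the NVMe-oF target inside that namespace and provisions it over the JSON-RPC socket; condensed from the rpc_cmd calls in the trace that follows, the equivalent by-hand sequence looks roughly like this (a sketch assuming the repository path shown above and scripts/rpc.py as the RPC client on the default /var/tmp/spdk.sock):

  # start the target inside the namespace, then wait for /var/tmp/spdk.sock to appear
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # create the TCP transport and a 64 MiB malloc bdev with 512-byte blocks
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # expose the bdev through subsystem cnode1 and listen on the namespace-side address
  # (the trace also passes --nguid/--eui64 to nvmf_subsystem_add_ns)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # query the discovery controller from the initiator side, as done further down in the trace
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The trace below records each of these steps: the reactor startup notices, the rpc_cmd invocations, the nvmf_get_subsystems JSON listing both the discovery subsystem and cnode1, and finally the spdk_nvme_identify run whose controller report closes out this section.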
00:14:15.741 [2024-11-28 21:21:38.641509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.741 [2024-11-28 21:21:38.641708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.741 [2024-11-28 21:21:38.641778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.741 [2024-11-28 21:21:38.641778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.741 21:21:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.741 21:21:39 -- common/autotest_common.sh@862 -- # return 0 00:14:15.741 21:21:39 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:15.741 21:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.741 21:21:39 -- common/autotest_common.sh@10 -- # set +x 00:14:15.741 [2024-11-28 21:21:39.446441] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.741 21:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.741 21:21:39 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:15.741 21:21:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:15.741 21:21:39 -- common/autotest_common.sh@10 -- # set +x 00:14:16.000 21:21:39 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:16.000 21:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.000 21:21:39 -- common/autotest_common.sh@10 -- # set +x 00:14:16.000 Malloc0 00:14:16.000 21:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.000 21:21:39 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:16.000 21:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.000 21:21:39 -- common/autotest_common.sh@10 -- # set +x 00:14:16.000 21:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.000 21:21:39 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:16.000 21:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.000 21:21:39 -- common/autotest_common.sh@10 -- # set +x 00:14:16.000 21:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.000 21:21:39 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.000 21:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.000 21:21:39 -- common/autotest_common.sh@10 -- # set +x 00:14:16.000 [2024-11-28 21:21:39.541429] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.000 21:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.000 21:21:39 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:16.000 21:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.000 21:21:39 -- common/autotest_common.sh@10 -- # set +x 00:14:16.000 21:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.000 21:21:39 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:16.000 21:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.000 21:21:39 -- common/autotest_common.sh@10 -- # set +x 00:14:16.000 [2024-11-28 21:21:39.557214] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:16.000 [ 
00:14:16.000 { 00:14:16.000 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:16.000 "subtype": "Discovery", 00:14:16.000 "listen_addresses": [ 00:14:16.000 { 00:14:16.000 "transport": "TCP", 00:14:16.000 "trtype": "TCP", 00:14:16.000 "adrfam": "IPv4", 00:14:16.000 "traddr": "10.0.0.2", 00:14:16.000 "trsvcid": "4420" 00:14:16.000 } 00:14:16.000 ], 00:14:16.000 "allow_any_host": true, 00:14:16.000 "hosts": [] 00:14:16.000 }, 00:14:16.000 { 00:14:16.000 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.000 "subtype": "NVMe", 00:14:16.000 "listen_addresses": [ 00:14:16.000 { 00:14:16.000 "transport": "TCP", 00:14:16.000 "trtype": "TCP", 00:14:16.000 "adrfam": "IPv4", 00:14:16.000 "traddr": "10.0.0.2", 00:14:16.000 "trsvcid": "4420" 00:14:16.000 } 00:14:16.000 ], 00:14:16.000 "allow_any_host": true, 00:14:16.000 "hosts": [], 00:14:16.000 "serial_number": "SPDK00000000000001", 00:14:16.000 "model_number": "SPDK bdev Controller", 00:14:16.000 "max_namespaces": 32, 00:14:16.000 "min_cntlid": 1, 00:14:16.000 "max_cntlid": 65519, 00:14:16.000 "namespaces": [ 00:14:16.000 { 00:14:16.000 "nsid": 1, 00:14:16.000 "bdev_name": "Malloc0", 00:14:16.000 "name": "Malloc0", 00:14:16.000 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:16.000 "eui64": "ABCDEF0123456789", 00:14:16.000 "uuid": "375d4c37-8506-403e-8464-5ffa481fb3c7" 00:14:16.000 } 00:14:16.000 ] 00:14:16.000 } 00:14:16.000 ] 00:14:16.000 21:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.000 21:21:39 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:16.000 [2024-11-28 21:21:39.596508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:16.000 [2024-11-28 21:21:39.596555] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80006 ] 00:14:16.000 [2024-11-28 21:21:39.736335] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:16.001 [2024-11-28 21:21:39.736424] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:16.001 [2024-11-28 21:21:39.736430] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:16.001 [2024-11-28 21:21:39.736442] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:16.001 [2024-11-28 21:21:39.736453] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:16.001 [2024-11-28 21:21:39.736579] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:16.001 [2024-11-28 21:21:39.736650] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11d7540 0 00:14:16.001 [2024-11-28 21:21:39.738113] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:16.001 [2024-11-28 21:21:39.738138] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:16.001 [2024-11-28 21:21:39.738145] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:16.001 [2024-11-28 21:21:39.738149] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:16.001 [2024-11-28 21:21:39.738193] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.001 [2024-11-28 21:21:39.738201] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.001 [2024-11-28 21:21:39.738206] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d7540) 00:14:16.001 [2024-11-28 21:21:39.738220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:16.001 [2024-11-28 21:21:39.738248] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210220, cid 0, qid 0 00:14:16.265 [2024-11-28 21:21:39.745070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.265 [2024-11-28 21:21:39.745093] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.265 [2024-11-28 21:21:39.745115] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.745121] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210220) on tqpair=0x11d7540 00:14:16.265 [2024-11-28 21:21:39.745138] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:16.265 [2024-11-28 21:21:39.745146] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:16.265 [2024-11-28 21:21:39.745153] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:16.265 [2024-11-28 21:21:39.745183] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.745189] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.265 [2024-11-28 
21:21:39.745194] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d7540) 00:14:16.265 [2024-11-28 21:21:39.745204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.265 [2024-11-28 21:21:39.745246] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210220, cid 0, qid 0 00:14:16.265 [2024-11-28 21:21:39.745320] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.265 [2024-11-28 21:21:39.745327] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.265 [2024-11-28 21:21:39.745331] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.745335] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210220) on tqpair=0x11d7540 00:14:16.265 [2024-11-28 21:21:39.745341] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:16.265 [2024-11-28 21:21:39.745349] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:16.265 [2024-11-28 21:21:39.745357] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.745361] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.745381] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d7540) 00:14:16.265 [2024-11-28 21:21:39.745388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.265 [2024-11-28 21:21:39.745407] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210220, cid 0, qid 0 00:14:16.265 [2024-11-28 21:21:39.745476] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.265 [2024-11-28 21:21:39.745483] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.265 [2024-11-28 21:21:39.745487] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.745491] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210220) on tqpair=0x11d7540 00:14:16.265 [2024-11-28 21:21:39.745498] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:16.265 [2024-11-28 21:21:39.745507] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:16.265 [2024-11-28 21:21:39.745515] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.745519] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.745539] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d7540) 00:14:16.265 [2024-11-28 21:21:39.745547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.265 [2024-11-28 21:21:39.745566] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210220, cid 0, qid 0 00:14:16.265 [2024-11-28 21:21:39.745617] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.265 [2024-11-28 21:21:39.745624] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.265 [2024-11-28 21:21:39.745628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.745632] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210220) on tqpair=0x11d7540 00:14:16.265 [2024-11-28 21:21:39.745640] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:16.265 [2024-11-28 21:21:39.745651] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.745656] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.745660] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d7540) 00:14:16.265 [2024-11-28 21:21:39.745668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.265 [2024-11-28 21:21:39.745687] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210220, cid 0, qid 0 00:14:16.265 [2024-11-28 21:21:39.745739] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.265 [2024-11-28 21:21:39.745746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.265 [2024-11-28 21:21:39.745750] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.745754] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210220) on tqpair=0x11d7540 00:14:16.265 [2024-11-28 21:21:39.745760] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:16.265 [2024-11-28 21:21:39.745765] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:16.265 [2024-11-28 21:21:39.745774] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:16.265 [2024-11-28 21:21:39.745880] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:16.265 [2024-11-28 21:21:39.745885] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:16.265 [2024-11-28 21:21:39.745895] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.745899] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.745903] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d7540) 00:14:16.265 [2024-11-28 21:21:39.745911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.265 [2024-11-28 21:21:39.745931] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210220, cid 0, qid 0 00:14:16.265 [2024-11-28 21:21:39.745989] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.265 [2024-11-28 21:21:39.745998] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.265 [2024-11-28 21:21:39.746031] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:14:16.265 [2024-11-28 21:21:39.746052] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210220) on tqpair=0x11d7540 00:14:16.265 [2024-11-28 21:21:39.746059] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:16.265 [2024-11-28 21:21:39.746071] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.746076] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.746080] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d7540) 00:14:16.265 [2024-11-28 21:21:39.746089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.265 [2024-11-28 21:21:39.746109] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210220, cid 0, qid 0 00:14:16.265 [2024-11-28 21:21:39.746172] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.265 [2024-11-28 21:21:39.746179] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.265 [2024-11-28 21:21:39.746183] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.746187] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210220) on tqpair=0x11d7540 00:14:16.265 [2024-11-28 21:21:39.746193] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:16.265 [2024-11-28 21:21:39.746199] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:16.265 [2024-11-28 21:21:39.746207] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:16.265 [2024-11-28 21:21:39.746223] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:16.265 [2024-11-28 21:21:39.746234] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.746239] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.746243] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d7540) 00:14:16.265 [2024-11-28 21:21:39.746251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.265 [2024-11-28 21:21:39.746272] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210220, cid 0, qid 0 00:14:16.265 [2024-11-28 21:21:39.746375] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:16.265 [2024-11-28 21:21:39.746383] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:16.265 [2024-11-28 21:21:39.746387] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:16.265 [2024-11-28 21:21:39.746391] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11d7540): datao=0, datal=4096, cccid=0 00:14:16.265 [2024-11-28 21:21:39.746396] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1210220) on tqpair(0x11d7540): expected_datao=0, 
payload_size=4096 00:14:16.266 [2024-11-28 21:21:39.746406] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746427] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746436] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.266 [2024-11-28 21:21:39.746443] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.266 [2024-11-28 21:21:39.746447] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746451] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210220) on tqpair=0x11d7540 00:14:16.266 [2024-11-28 21:21:39.746461] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:16.266 [2024-11-28 21:21:39.746467] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:16.266 [2024-11-28 21:21:39.746472] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:16.266 [2024-11-28 21:21:39.746477] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:16.266 [2024-11-28 21:21:39.746483] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:16.266 [2024-11-28 21:21:39.746488] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:16.266 [2024-11-28 21:21:39.746502] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:16.266 [2024-11-28 21:21:39.746511] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746516] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746520] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d7540) 00:14:16.266 [2024-11-28 21:21:39.746528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:16.266 [2024-11-28 21:21:39.746548] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210220, cid 0, qid 0 00:14:16.266 [2024-11-28 21:21:39.746607] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.266 [2024-11-28 21:21:39.746614] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.266 [2024-11-28 21:21:39.746618] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746622] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210220) on tqpair=0x11d7540 00:14:16.266 [2024-11-28 21:21:39.746646] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746651] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746655] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d7540) 00:14:16.266 [2024-11-28 21:21:39.746662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.266 [2024-11-28 
21:21:39.746668] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746673] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746676] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11d7540) 00:14:16.266 [2024-11-28 21:21:39.746683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.266 [2024-11-28 21:21:39.746689] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746693] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746697] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11d7540) 00:14:16.266 [2024-11-28 21:21:39.746703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.266 [2024-11-28 21:21:39.746709] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746713] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746717] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d7540) 00:14:16.266 [2024-11-28 21:21:39.746723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.266 [2024-11-28 21:21:39.746729] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:16.266 [2024-11-28 21:21:39.746742] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:16.266 [2024-11-28 21:21:39.746749] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746754] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746758] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11d7540) 00:14:16.266 [2024-11-28 21:21:39.746765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.266 [2024-11-28 21:21:39.746786] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210220, cid 0, qid 0 00:14:16.266 [2024-11-28 21:21:39.746794] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210380, cid 1, qid 0 00:14:16.266 [2024-11-28 21:21:39.746799] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12104e0, cid 2, qid 0 00:14:16.266 [2024-11-28 21:21:39.746804] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210640, cid 3, qid 0 00:14:16.266 [2024-11-28 21:21:39.746809] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12107a0, cid 4, qid 0 00:14:16.266 [2024-11-28 21:21:39.746900] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.266 [2024-11-28 21:21:39.746907] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.266 [2024-11-28 21:21:39.746911] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x12107a0) on tqpair=0x11d7540 00:14:16.266 [2024-11-28 21:21:39.746937] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:16.266 [2024-11-28 21:21:39.746942] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:16.266 [2024-11-28 21:21:39.746953] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746958] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.746962] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11d7540) 00:14:16.266 [2024-11-28 21:21:39.746969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.266 [2024-11-28 21:21:39.746987] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12107a0, cid 4, qid 0 00:14:16.266 [2024-11-28 21:21:39.747052] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:16.266 [2024-11-28 21:21:39.747059] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:16.266 [2024-11-28 21:21:39.747063] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.747067] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11d7540): datao=0, datal=4096, cccid=4 00:14:16.266 [2024-11-28 21:21:39.747071] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12107a0) on tqpair(0x11d7540): expected_datao=0, payload_size=4096 00:14:16.266 [2024-11-28 21:21:39.747092] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.747097] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.747105] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.266 [2024-11-28 21:21:39.747111] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.266 [2024-11-28 21:21:39.747115] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.747119] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12107a0) on tqpair=0x11d7540 00:14:16.266 [2024-11-28 21:21:39.747132] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:16.266 [2024-11-28 21:21:39.747184] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.747207] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.747211] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11d7540) 00:14:16.266 [2024-11-28 21:21:39.747219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.266 [2024-11-28 21:21:39.747227] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.747232] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.747236] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11d7540) 00:14:16.266 [2024-11-28 21:21:39.747242] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.266 [2024-11-28 21:21:39.747269] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12107a0, cid 4, qid 0 00:14:16.266 [2024-11-28 21:21:39.747277] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210900, cid 5, qid 0 00:14:16.266 [2024-11-28 21:21:39.747391] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:16.266 [2024-11-28 21:21:39.747399] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:16.266 [2024-11-28 21:21:39.747403] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.747407] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11d7540): datao=0, datal=1024, cccid=4 00:14:16.266 [2024-11-28 21:21:39.747413] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12107a0) on tqpair(0x11d7540): expected_datao=0, payload_size=1024 00:14:16.266 [2024-11-28 21:21:39.747421] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.747425] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.747432] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.266 [2024-11-28 21:21:39.747438] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.266 [2024-11-28 21:21:39.747442] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.747446] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210900) on tqpair=0x11d7540 00:14:16.266 [2024-11-28 21:21:39.747465] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.266 [2024-11-28 21:21:39.747473] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.266 [2024-11-28 21:21:39.747477] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.747481] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12107a0) on tqpair=0x11d7540 00:14:16.266 [2024-11-28 21:21:39.747513] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.747518] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.266 [2024-11-28 21:21:39.747522] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11d7540) 00:14:16.267 [2024-11-28 21:21:39.747530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.267 [2024-11-28 21:21:39.747569] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12107a0, cid 4, qid 0 00:14:16.267 [2024-11-28 21:21:39.747636] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:16.267 [2024-11-28 21:21:39.747643] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:16.267 [2024-11-28 21:21:39.747647] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:16.267 [2024-11-28 21:21:39.747651] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11d7540): datao=0, datal=3072, cccid=4 00:14:16.267 [2024-11-28 21:21:39.747656] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12107a0) on tqpair(0x11d7540): expected_datao=0, payload_size=3072 00:14:16.267 [2024-11-28 
21:21:39.747663] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:16.267 [2024-11-28 21:21:39.747667] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:16.267 [2024-11-28 21:21:39.747675] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.267 [2024-11-28 21:21:39.747681] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.267 [2024-11-28 21:21:39.747685] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.267 [2024-11-28 21:21:39.747689] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12107a0) on tqpair=0x11d7540 00:14:16.267 [2024-11-28 21:21:39.747699] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.267 [2024-11-28 21:21:39.747703] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.267 [2024-11-28 21:21:39.747707] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11d7540) 00:14:16.267 [2024-11-28 21:21:39.747714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.267 [2024-11-28 21:21:39.747737] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12107a0, cid 4, qid 0 00:14:16.267 [2024-11-28 21:21:39.747799] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:16.267 [2024-11-28 21:21:39.747806] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:16.267 [2024-11-28 21:21:39.747809] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:16.267 [2024-11-28 21:21:39.747813] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11d7540): datao=0, datal=8, cccid=4 00:14:16.267 [2024-11-28 21:21:39.747818] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12107a0) on tqpair(0x11d7540): expected_datao=0, payload_size=8 00:14:16.267 [2024-11-28 21:21:39.747825] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:16.267 [2024-11-28 21:21:39.747829] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:16.267 [2024-11-28 21:21:39.747844] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.267 [2024-11-28 21:21:39.747851] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.267 [2024-11-28 21:21:39.747855] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.267 [2024-11-28 21:21:39.747859] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12107a0) on tqpair=0x11d7540 00:14:16.267 ===================================================== 00:14:16.267 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:16.267 ===================================================== 00:14:16.267 Controller Capabilities/Features 00:14:16.267 ================================ 00:14:16.267 Vendor ID: 0000 00:14:16.267 Subsystem Vendor ID: 0000 00:14:16.267 Serial Number: .................... 00:14:16.267 Model Number: ........................................ 
00:14:16.267 Firmware Version: 24.01.1 00:14:16.267 Recommended Arb Burst: 0 00:14:16.267 IEEE OUI Identifier: 00 00 00 00:14:16.267 Multi-path I/O 00:14:16.267 May have multiple subsystem ports: No 00:14:16.267 May have multiple controllers: No 00:14:16.267 Associated with SR-IOV VF: No 00:14:16.267 Max Data Transfer Size: 131072 00:14:16.267 Max Number of Namespaces: 0 00:14:16.267 Max Number of I/O Queues: 1024 00:14:16.267 NVMe Specification Version (VS): 1.3 00:14:16.267 NVMe Specification Version (Identify): 1.3 00:14:16.267 Maximum Queue Entries: 128 00:14:16.267 Contiguous Queues Required: Yes 00:14:16.267 Arbitration Mechanisms Supported 00:14:16.267 Weighted Round Robin: Not Supported 00:14:16.267 Vendor Specific: Not Supported 00:14:16.267 Reset Timeout: 15000 ms 00:14:16.267 Doorbell Stride: 4 bytes 00:14:16.267 NVM Subsystem Reset: Not Supported 00:14:16.267 Command Sets Supported 00:14:16.267 NVM Command Set: Supported 00:14:16.267 Boot Partition: Not Supported 00:14:16.267 Memory Page Size Minimum: 4096 bytes 00:14:16.267 Memory Page Size Maximum: 4096 bytes 00:14:16.267 Persistent Memory Region: Not Supported 00:14:16.267 Optional Asynchronous Events Supported 00:14:16.267 Namespace Attribute Notices: Not Supported 00:14:16.267 Firmware Activation Notices: Not Supported 00:14:16.267 ANA Change Notices: Not Supported 00:14:16.267 PLE Aggregate Log Change Notices: Not Supported 00:14:16.267 LBA Status Info Alert Notices: Not Supported 00:14:16.267 EGE Aggregate Log Change Notices: Not Supported 00:14:16.267 Normal NVM Subsystem Shutdown event: Not Supported 00:14:16.267 Zone Descriptor Change Notices: Not Supported 00:14:16.267 Discovery Log Change Notices: Supported 00:14:16.267 Controller Attributes 00:14:16.267 128-bit Host Identifier: Not Supported 00:14:16.267 Non-Operational Permissive Mode: Not Supported 00:14:16.267 NVM Sets: Not Supported 00:14:16.267 Read Recovery Levels: Not Supported 00:14:16.267 Endurance Groups: Not Supported 00:14:16.267 Predictable Latency Mode: Not Supported 00:14:16.267 Traffic Based Keep ALive: Not Supported 00:14:16.267 Namespace Granularity: Not Supported 00:14:16.267 SQ Associations: Not Supported 00:14:16.267 UUID List: Not Supported 00:14:16.267 Multi-Domain Subsystem: Not Supported 00:14:16.267 Fixed Capacity Management: Not Supported 00:14:16.267 Variable Capacity Management: Not Supported 00:14:16.267 Delete Endurance Group: Not Supported 00:14:16.267 Delete NVM Set: Not Supported 00:14:16.267 Extended LBA Formats Supported: Not Supported 00:14:16.267 Flexible Data Placement Supported: Not Supported 00:14:16.267 00:14:16.267 Controller Memory Buffer Support 00:14:16.267 ================================ 00:14:16.267 Supported: No 00:14:16.267 00:14:16.267 Persistent Memory Region Support 00:14:16.267 ================================ 00:14:16.267 Supported: No 00:14:16.267 00:14:16.267 Admin Command Set Attributes 00:14:16.267 ============================ 00:14:16.267 Security Send/Receive: Not Supported 00:14:16.267 Format NVM: Not Supported 00:14:16.267 Firmware Activate/Download: Not Supported 00:14:16.267 Namespace Management: Not Supported 00:14:16.267 Device Self-Test: Not Supported 00:14:16.267 Directives: Not Supported 00:14:16.267 NVMe-MI: Not Supported 00:14:16.267 Virtualization Management: Not Supported 00:14:16.267 Doorbell Buffer Config: Not Supported 00:14:16.267 Get LBA Status Capability: Not Supported 00:14:16.267 Command & Feature Lockdown Capability: Not Supported 00:14:16.267 Abort Command Limit: 1 00:14:16.267 
Async Event Request Limit: 4 00:14:16.267 Number of Firmware Slots: N/A 00:14:16.267 Firmware Slot 1 Read-Only: N/A 00:14:16.267 Firmware Activation Without Reset: N/A 00:14:16.267 Multiple Update Detection Support: N/A 00:14:16.267 Firmware Update Granularity: No Information Provided 00:14:16.267 Per-Namespace SMART Log: No 00:14:16.267 Asymmetric Namespace Access Log Page: Not Supported 00:14:16.267 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:16.267 Command Effects Log Page: Not Supported 00:14:16.267 Get Log Page Extended Data: Supported 00:14:16.267 Telemetry Log Pages: Not Supported 00:14:16.267 Persistent Event Log Pages: Not Supported 00:14:16.267 Supported Log Pages Log Page: May Support 00:14:16.267 Commands Supported & Effects Log Page: Not Supported 00:14:16.267 Feature Identifiers & Effects Log Page:May Support 00:14:16.267 NVMe-MI Commands & Effects Log Page: May Support 00:14:16.267 Data Area 4 for Telemetry Log: Not Supported 00:14:16.267 Error Log Page Entries Supported: 128 00:14:16.267 Keep Alive: Not Supported 00:14:16.267 00:14:16.267 NVM Command Set Attributes 00:14:16.267 ========================== 00:14:16.267 Submission Queue Entry Size 00:14:16.267 Max: 1 00:14:16.267 Min: 1 00:14:16.267 Completion Queue Entry Size 00:14:16.267 Max: 1 00:14:16.267 Min: 1 00:14:16.267 Number of Namespaces: 0 00:14:16.267 Compare Command: Not Supported 00:14:16.267 Write Uncorrectable Command: Not Supported 00:14:16.267 Dataset Management Command: Not Supported 00:14:16.267 Write Zeroes Command: Not Supported 00:14:16.267 Set Features Save Field: Not Supported 00:14:16.267 Reservations: Not Supported 00:14:16.267 Timestamp: Not Supported 00:14:16.267 Copy: Not Supported 00:14:16.267 Volatile Write Cache: Not Present 00:14:16.267 Atomic Write Unit (Normal): 1 00:14:16.267 Atomic Write Unit (PFail): 1 00:14:16.267 Atomic Compare & Write Unit: 1 00:14:16.267 Fused Compare & Write: Supported 00:14:16.267 Scatter-Gather List 00:14:16.267 SGL Command Set: Supported 00:14:16.267 SGL Keyed: Supported 00:14:16.267 SGL Bit Bucket Descriptor: Not Supported 00:14:16.267 SGL Metadata Pointer: Not Supported 00:14:16.267 Oversized SGL: Not Supported 00:14:16.267 SGL Metadata Address: Not Supported 00:14:16.267 SGL Offset: Supported 00:14:16.267 Transport SGL Data Block: Not Supported 00:14:16.267 Replay Protected Memory Block: Not Supported 00:14:16.267 00:14:16.267 Firmware Slot Information 00:14:16.267 ========================= 00:14:16.267 Active slot: 0 00:14:16.267 00:14:16.267 00:14:16.267 Error Log 00:14:16.267 ========= 00:14:16.267 00:14:16.268 Active Namespaces 00:14:16.268 ================= 00:14:16.268 Discovery Log Page 00:14:16.268 ================== 00:14:16.268 Generation Counter: 2 00:14:16.268 Number of Records: 2 00:14:16.268 Record Format: 0 00:14:16.268 00:14:16.268 Discovery Log Entry 0 00:14:16.268 ---------------------- 00:14:16.268 Transport Type: 3 (TCP) 00:14:16.268 Address Family: 1 (IPv4) 00:14:16.268 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:16.268 Entry Flags: 00:14:16.268 Duplicate Returned Information: 1 00:14:16.268 Explicit Persistent Connection Support for Discovery: 1 00:14:16.268 Transport Requirements: 00:14:16.268 Secure Channel: Not Required 00:14:16.268 Port ID: 0 (0x0000) 00:14:16.268 Controller ID: 65535 (0xffff) 00:14:16.268 Admin Max SQ Size: 128 00:14:16.268 Transport Service Identifier: 4420 00:14:16.268 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:16.268 Transport Address: 10.0.0.2 00:14:16.268 
Discovery Log Entry 1 00:14:16.268 ---------------------- 00:14:16.268 Transport Type: 3 (TCP) 00:14:16.268 Address Family: 1 (IPv4) 00:14:16.268 Subsystem Type: 2 (NVM Subsystem) 00:14:16.268 Entry Flags: 00:14:16.268 Duplicate Returned Information: 0 00:14:16.268 Explicit Persistent Connection Support for Discovery: 0 00:14:16.268 Transport Requirements: 00:14:16.268 Secure Channel: Not Required 00:14:16.268 Port ID: 0 (0x0000) 00:14:16.268 Controller ID: 65535 (0xffff) 00:14:16.268 Admin Max SQ Size: 128 00:14:16.268 Transport Service Identifier: 4420 00:14:16.268 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:16.268 Transport Address: 10.0.0.2 [2024-11-28 21:21:39.747968] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:16.268 [2024-11-28 21:21:39.747987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.268 [2024-11-28 21:21:39.747995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.268 [2024-11-28 21:21:39.748013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.268 [2024-11-28 21:21:39.748021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.268 [2024-11-28 21:21:39.748032] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748036] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748040] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d7540) 00:14:16.268 [2024-11-28 21:21:39.748049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.268 [2024-11-28 21:21:39.748074] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210640, cid 3, qid 0 00:14:16.268 [2024-11-28 21:21:39.748133] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.268 [2024-11-28 21:21:39.748140] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.268 [2024-11-28 21:21:39.748144] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748148] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210640) on tqpair=0x11d7540 00:14:16.268 [2024-11-28 21:21:39.748157] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748161] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748165] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d7540) 00:14:16.268 [2024-11-28 21:21:39.748172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.268 [2024-11-28 21:21:39.748194] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210640, cid 3, qid 0 00:14:16.268 [2024-11-28 21:21:39.748259] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.268 [2024-11-28 21:21:39.748265] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.268 [2024-11-28 21:21:39.748269] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748273] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210640) on tqpair=0x11d7540 00:14:16.268 [2024-11-28 21:21:39.748279] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:16.268 [2024-11-28 21:21:39.748284] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:16.268 [2024-11-28 21:21:39.748293] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748298] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748302] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d7540) 00:14:16.268 [2024-11-28 21:21:39.748309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.268 [2024-11-28 21:21:39.748327] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210640, cid 3, qid 0 00:14:16.268 [2024-11-28 21:21:39.748373] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.268 [2024-11-28 21:21:39.748380] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.268 [2024-11-28 21:21:39.748384] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748388] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210640) on tqpair=0x11d7540 00:14:16.268 [2024-11-28 21:21:39.748400] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748404] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748408] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d7540) 00:14:16.268 [2024-11-28 21:21:39.748415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.268 [2024-11-28 21:21:39.748432] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210640, cid 3, qid 0 00:14:16.268 [2024-11-28 21:21:39.748482] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.268 [2024-11-28 21:21:39.748488] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.268 [2024-11-28 21:21:39.748492] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748496] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210640) on tqpair=0x11d7540 00:14:16.268 [2024-11-28 21:21:39.748507] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748512] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748515] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d7540) 00:14:16.268 [2024-11-28 21:21:39.748522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.268 [2024-11-28 21:21:39.748539] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210640, cid 3, qid 0 00:14:16.268 [2024-11-28 21:21:39.748589] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.268 [2024-11-28 
21:21:39.748596] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.268 [2024-11-28 21:21:39.748600] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748604] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210640) on tqpair=0x11d7540 00:14:16.268 [2024-11-28 21:21:39.748614] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748619] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748623] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d7540) 00:14:16.268 [2024-11-28 21:21:39.748630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.268 [2024-11-28 21:21:39.748647] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210640, cid 3, qid 0 00:14:16.268 [2024-11-28 21:21:39.748690] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.268 [2024-11-28 21:21:39.748697] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.268 [2024-11-28 21:21:39.748701] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748705] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210640) on tqpair=0x11d7540 00:14:16.268 [2024-11-28 21:21:39.748715] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748720] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748724] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d7540) 00:14:16.268 [2024-11-28 21:21:39.748731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.268 [2024-11-28 21:21:39.748748] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210640, cid 3, qid 0 00:14:16.268 [2024-11-28 21:21:39.748801] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.268 [2024-11-28 21:21:39.748825] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.268 [2024-11-28 21:21:39.748828] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748832] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210640) on tqpair=0x11d7540 00:14:16.268 [2024-11-28 21:21:39.748844] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748848] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748852] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d7540) 00:14:16.268 [2024-11-28 21:21:39.748860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.268 [2024-11-28 21:21:39.748877] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210640, cid 3, qid 0 00:14:16.268 [2024-11-28 21:21:39.748928] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.268 [2024-11-28 21:21:39.748935] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.268 [2024-11-28 21:21:39.748939] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
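The records above trace the discovery controller being torn down: nvme_ctrlr_shutdown_set_cc_done sets CC.SHN, and the repeated FABRIC PROPERTY GET commands are the host polling CSTS until the shutdown-complete status appears (the following record reports it finishing in about 4 milliseconds against a 10000 ms timeout). A self-contained sketch of that handshake follows; prop_get32()/prop_set32() are hypothetical stand-ins for the Fabrics Property Get/Set capsules, and the register offsets and bit positions (CC at 0x14 with SHN in bits 15:14, CSTS at 0x1C with SHST in bits 3:2) come from the NVMe specification, not from this log.

#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC        0x14
#define NVME_REG_CSTS      0x1c
#define CC_SHN_MASK        (3u << 14)
#define CC_SHN_NORMAL      (1u << 14)   /* SHN = 01b: normal shutdown notification */
#define CSTS_SHST_MASK     (3u << 2)
#define CSTS_SHST_COMPLETE (2u << 2)    /* SHST = 10b: shutdown processing complete */

/* Stand-ins for the Fabrics Property Get/Set exchange; on a fabrics controller
 * these travel as admin capsules, which is what the FABRIC PROPERTY GET/SET
 * records in the trace correspond to. Here they just model a controller that
 * finishes shutdown immediately. */
static uint32_t reg_cc, reg_csts;

static uint32_t prop_get32(uint32_t off)
{
	return off == NVME_REG_CC ? reg_cc : reg_csts;
}

static void prop_set32(uint32_t off, uint32_t val)
{
	if (off == NVME_REG_CC) {
		reg_cc = val;
		if (val & CC_SHN_MASK) {
			reg_csts |= CSTS_SHST_COMPLETE;  /* pretend shutdown completes at once */
		}
	}
}

int main(void)
{
	uint32_t cc = prop_get32(NVME_REG_CC);

	/* nvme_ctrlr_shutdown_set_cc_done: request a normal shutdown via CC.SHN. */
	prop_set32(NVME_REG_CC, (cc & ~CC_SHN_MASK) | CC_SHN_NORMAL);

	/* Poll CSTS.SHST; the trace uses a 10000 ms timeout and finishes in ~4 ms. */
	for (int ms = 0; ms < 10000; ms++) {
		if ((prop_get32(NVME_REG_CSTS) & CSTS_SHST_MASK) == CSTS_SHST_COMPLETE) {
			printf("shutdown complete after %d poll(s)\n", ms + 1);
			return 0;
		}
		/* a real host would sleep ~1 ms between polls */
	}
	fprintf(stderr, "shutdown timed out\n");
	return 1;
}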
00:14:16.268 [2024-11-28 21:21:39.748943] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210640) on tqpair=0x11d7540 00:14:16.268 [2024-11-28 21:21:39.748954] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748959] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.268 [2024-11-28 21:21:39.748963] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d7540) 00:14:16.269 [2024-11-28 21:21:39.748970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.269 [2024-11-28 21:21:39.748988] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210640, cid 3, qid 0 00:14:16.269 [2024-11-28 21:21:39.753034] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.269 [2024-11-28 21:21:39.753056] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.269 [2024-11-28 21:21:39.753078] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.753082] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210640) on tqpair=0x11d7540 00:14:16.269 [2024-11-28 21:21:39.753097] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.753102] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.753106] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d7540) 00:14:16.269 [2024-11-28 21:21:39.753115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.269 [2024-11-28 21:21:39.753141] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210640, cid 3, qid 0 00:14:16.269 [2024-11-28 21:21:39.753194] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.269 [2024-11-28 21:21:39.753216] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.269 [2024-11-28 21:21:39.753219] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.753223] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1210640) on tqpair=0x11d7540 00:14:16.269 [2024-11-28 21:21:39.753232] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:14:16.269 00:14:16.269 21:21:39 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:16.269 [2024-11-28 21:21:39.788687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
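At this point host/identify.sh runs build/bin/spdk_nvme_identify a second time, now against the data subsystem nqn.2016-06.io.spdk:cnode1, passing the same transport ID string format via -r. For reference, roughly the same identify can be driven through SPDK's public C API; the sketch below is based on my reading of spdk/nvme.h and spdk/env.h (spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_ctrlr_get_data, spdk_nvme_detach) and should be checked against the headers in the SPDK revision used by this build.

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string that -r passes on the command line above. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* NULL opts requests the library defaults; this drives the same connect
	 * and identify state machine that the trace below walks through. */
	struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("MN: %.40s SN: %.20s FR: %.8s\n", cdata->mn, cdata->sn, cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}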
00:14:16.269 [2024-11-28 21:21:39.788725] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80014 ] 00:14:16.269 [2024-11-28 21:21:39.930912] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:16.269 [2024-11-28 21:21:39.930974] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:16.269 [2024-11-28 21:21:39.930982] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:16.269 [2024-11-28 21:21:39.930995] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:16.269 [2024-11-28 21:21:39.931017] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:16.269 [2024-11-28 21:21:39.931172] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:16.269 [2024-11-28 21:21:39.931230] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18c2540 0 00:14:16.269 [2024-11-28 21:21:39.940085] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:16.269 [2024-11-28 21:21:39.940126] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:16.269 [2024-11-28 21:21:39.940149] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:16.269 [2024-11-28 21:21:39.940167] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:16.269 [2024-11-28 21:21:39.940224] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.940232] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.940236] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c2540) 00:14:16.269 [2024-11-28 21:21:39.940249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:16.269 [2024-11-28 21:21:39.940279] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb220, cid 0, qid 0 00:14:16.269 [2024-11-28 21:21:39.942072] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.269 [2024-11-28 21:21:39.942095] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.269 [2024-11-28 21:21:39.942101] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.942106] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb220) on tqpair=0x18c2540 00:14:16.269 [2024-11-28 21:21:39.942123] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:16.269 [2024-11-28 21:21:39.942131] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:16.269 [2024-11-28 21:21:39.942138] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:16.269 [2024-11-28 21:21:39.942155] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.942161] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.942165] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c2540) 00:14:16.269 [2024-11-28 21:21:39.942175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.269 [2024-11-28 21:21:39.942202] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb220, cid 0, qid 0 00:14:16.269 [2024-11-28 21:21:39.942285] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.269 [2024-11-28 21:21:39.942293] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.269 [2024-11-28 21:21:39.942297] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.942301] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb220) on tqpair=0x18c2540 00:14:16.269 [2024-11-28 21:21:39.942308] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:16.269 [2024-11-28 21:21:39.942317] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:16.269 [2024-11-28 21:21:39.942325] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.942330] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.942334] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c2540) 00:14:16.269 [2024-11-28 21:21:39.942342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.269 [2024-11-28 21:21:39.942362] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb220, cid 0, qid 0 00:14:16.269 [2024-11-28 21:21:39.942636] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.269 [2024-11-28 21:21:39.942651] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.269 [2024-11-28 21:21:39.942656] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.942661] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb220) on tqpair=0x18c2540 00:14:16.269 [2024-11-28 21:21:39.942668] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:16.269 [2024-11-28 21:21:39.942678] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:16.269 [2024-11-28 21:21:39.942686] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.942690] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.942694] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c2540) 00:14:16.269 [2024-11-28 21:21:39.942702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.269 [2024-11-28 21:21:39.942723] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb220, cid 0, qid 0 00:14:16.269 [2024-11-28 21:21:39.942781] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.269 [2024-11-28 21:21:39.942788] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.269 [2024-11-28 
21:21:39.942792] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.942796] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb220) on tqpair=0x18c2540 00:14:16.269 [2024-11-28 21:21:39.942803] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:16.269 [2024-11-28 21:21:39.942814] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.942819] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.942823] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c2540) 00:14:16.269 [2024-11-28 21:21:39.942831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.269 [2024-11-28 21:21:39.942850] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb220, cid 0, qid 0 00:14:16.269 [2024-11-28 21:21:39.943227] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.269 [2024-11-28 21:21:39.943245] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.269 [2024-11-28 21:21:39.943250] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.269 [2024-11-28 21:21:39.943254] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb220) on tqpair=0x18c2540 00:14:16.269 [2024-11-28 21:21:39.943260] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:16.270 [2024-11-28 21:21:39.943266] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:16.270 [2024-11-28 21:21:39.943275] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:16.270 [2024-11-28 21:21:39.943382] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:16.270 [2024-11-28 21:21:39.943387] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:16.270 [2024-11-28 21:21:39.943397] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.943401] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.943405] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c2540) 00:14:16.270 [2024-11-28 21:21:39.943413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.270 [2024-11-28 21:21:39.943436] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb220, cid 0, qid 0 00:14:16.270 [2024-11-28 21:21:39.943956] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.270 [2024-11-28 21:21:39.943970] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.270 [2024-11-28 21:21:39.943974] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.943979] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb220) on tqpair=0x18c2540 00:14:16.270 
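The trace here is the generic controller-enable state machine: the host reads CC and CSTS over Fabrics Property Get, observes CC.EN = 0 && CSTS.RDY = 0, writes CC.EN = 1, and then waits for CSTS.RDY = 1 with a 15000 ms timeout. Condensed into the same style as the shutdown sketch earlier (reusing the hypothetical prop_get32()/prop_set32() helpers and register defines; EN and RDY are both bit 0 of their registers per the NVMe specification), the enable path looks roughly like this:

/* Enable handshake, reusing prop_get32()/prop_set32() and the NVME_REG_*
 * defines from the shutdown sketch above. */
static int enable_controller(void)
{
	uint32_t cc = prop_get32(NVME_REG_CC);
	uint32_t csts = prop_get32(NVME_REG_CSTS);

	if ((cc & 1u) == 0 && (csts & 1u) == 0) {
		/* "CC.EN = 0 && CSTS.RDY = 0": controller is disabled, enable it. */
		prop_set32(NVME_REG_CC, cc | 1u);
	}

	/* "setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)" */
	for (int ms = 0; ms < 15000; ms++) {
		if (prop_get32(NVME_REG_CSTS) & 1u) {
			return 0;
		}
		/* sleep ~1 ms between polls */
	}
	return -1;
}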
[2024-11-28 21:21:39.943985] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:16.270 [2024-11-28 21:21:39.943996] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.944013] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.944018] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c2540) 00:14:16.270 [2024-11-28 21:21:39.944026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.270 [2024-11-28 21:21:39.944047] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb220, cid 0, qid 0 00:14:16.270 [2024-11-28 21:21:39.944106] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.270 [2024-11-28 21:21:39.944114] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.270 [2024-11-28 21:21:39.944118] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.944122] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb220) on tqpair=0x18c2540 00:14:16.270 [2024-11-28 21:21:39.944128] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:16.270 [2024-11-28 21:21:39.944133] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:16.270 [2024-11-28 21:21:39.944142] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:16.270 [2024-11-28 21:21:39.944174] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:16.270 [2024-11-28 21:21:39.944184] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.944205] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.944209] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c2540) 00:14:16.270 [2024-11-28 21:21:39.944218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.270 [2024-11-28 21:21:39.944239] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb220, cid 0, qid 0 00:14:16.270 [2024-11-28 21:21:39.944629] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:16.270 [2024-11-28 21:21:39.944644] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:16.270 [2024-11-28 21:21:39.944649] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.944654] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c2540): datao=0, datal=4096, cccid=0 00:14:16.270 [2024-11-28 21:21:39.944659] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18fb220) on tqpair(0x18c2540): expected_datao=0, payload_size=4096 00:14:16.270 [2024-11-28 21:21:39.944669] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.944674] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.944699] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.270 [2024-11-28 21:21:39.944705] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.270 [2024-11-28 21:21:39.944709] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.944713] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb220) on tqpair=0x18c2540 00:14:16.270 [2024-11-28 21:21:39.944722] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:16.270 [2024-11-28 21:21:39.944728] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:16.270 [2024-11-28 21:21:39.944733] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:16.270 [2024-11-28 21:21:39.944738] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:16.270 [2024-11-28 21:21:39.944743] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:16.270 [2024-11-28 21:21:39.944748] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:16.270 [2024-11-28 21:21:39.944762] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:16.270 [2024-11-28 21:21:39.944772] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.944776] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.944797] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c2540) 00:14:16.270 [2024-11-28 21:21:39.944805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:16.270 [2024-11-28 21:21:39.944828] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb220, cid 0, qid 0 00:14:16.270 [2024-11-28 21:21:39.948061] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.270 [2024-11-28 21:21:39.948078] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.270 [2024-11-28 21:21:39.948084] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.948088] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb220) on tqpair=0x18c2540 00:14:16.270 [2024-11-28 21:21:39.948100] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.948104] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.948108] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c2540) 00:14:16.270 [2024-11-28 21:21:39.948117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.270 [2024-11-28 21:21:39.948124] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.948128] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.948132] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x18c2540) 00:14:16.270 [2024-11-28 21:21:39.948138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.270 [2024-11-28 21:21:39.948145] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.948149] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.948152] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18c2540) 00:14:16.270 [2024-11-28 21:21:39.948159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.270 [2024-11-28 21:21:39.948165] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.948169] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.948173] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c2540) 00:14:16.270 [2024-11-28 21:21:39.948179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.270 [2024-11-28 21:21:39.948185] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:16.270 [2024-11-28 21:21:39.948217] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:16.270 [2024-11-28 21:21:39.948226] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.948230] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.948234] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c2540) 00:14:16.270 [2024-11-28 21:21:39.948241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.270 [2024-11-28 21:21:39.948285] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb220, cid 0, qid 0 00:14:16.270 [2024-11-28 21:21:39.948293] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb380, cid 1, qid 0 00:14:16.270 [2024-11-28 21:21:39.948298] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb4e0, cid 2, qid 0 00:14:16.270 [2024-11-28 21:21:39.948303] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb640, cid 3, qid 0 00:14:16.270 [2024-11-28 21:21:39.948308] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb7a0, cid 4, qid 0 00:14:16.270 [2024-11-28 21:21:39.948790] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.270 [2024-11-28 21:21:39.948807] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.270 [2024-11-28 21:21:39.948812] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.948816] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb7a0) on tqpair=0x18c2540 00:14:16.270 [2024-11-28 21:21:39.948823] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:16.270 [2024-11-28 21:21:39.948830] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:16.270 [2024-11-28 21:21:39.948839] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:16.270 [2024-11-28 21:21:39.948851] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:16.270 [2024-11-28 21:21:39.948859] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.948864] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.270 [2024-11-28 21:21:39.948868] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c2540) 00:14:16.270 [2024-11-28 21:21:39.948876] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:16.271 [2024-11-28 21:21:39.948897] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb7a0, cid 4, qid 0 00:14:16.271 [2024-11-28 21:21:39.948961] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.271 [2024-11-28 21:21:39.948968] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.271 [2024-11-28 21:21:39.948972] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.948976] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb7a0) on tqpair=0x18c2540 00:14:16.271 [2024-11-28 21:21:39.949066] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:16.271 [2024-11-28 21:21:39.949079] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:16.271 [2024-11-28 21:21:39.949089] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.949093] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.949097] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c2540) 00:14:16.271 [2024-11-28 21:21:39.949105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.271 [2024-11-28 21:21:39.949128] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb7a0, cid 4, qid 0 00:14:16.271 [2024-11-28 21:21:39.949475] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:16.271 [2024-11-28 21:21:39.949491] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:16.271 [2024-11-28 21:21:39.949496] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.949500] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c2540): datao=0, datal=4096, cccid=4 00:14:16.271 [2024-11-28 21:21:39.949506] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18fb7a0) on tqpair(0x18c2540): expected_datao=0, payload_size=4096 00:14:16.271 [2024-11-28 21:21:39.949515] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.949520] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:14:16.271 [2024-11-28 21:21:39.949529] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.271 [2024-11-28 21:21:39.949535] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.271 [2024-11-28 21:21:39.949539] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.949543] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb7a0) on tqpair=0x18c2540 00:14:16.271 [2024-11-28 21:21:39.949561] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:16.271 [2024-11-28 21:21:39.949572] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:16.271 [2024-11-28 21:21:39.949584] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:16.271 [2024-11-28 21:21:39.949592] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.949597] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.949601] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c2540) 00:14:16.271 [2024-11-28 21:21:39.949609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.271 [2024-11-28 21:21:39.949632] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb7a0, cid 4, qid 0 00:14:16.271 [2024-11-28 21:21:39.949991] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:16.271 [2024-11-28 21:21:39.953055] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:16.271 [2024-11-28 21:21:39.953072] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.953077] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c2540): datao=0, datal=4096, cccid=4 00:14:16.271 [2024-11-28 21:21:39.953083] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18fb7a0) on tqpair(0x18c2540): expected_datao=0, payload_size=4096 00:14:16.271 [2024-11-28 21:21:39.953092] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.953097] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.953108] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.271 [2024-11-28 21:21:39.953115] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.271 [2024-11-28 21:21:39.953119] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.953123] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb7a0) on tqpair=0x18c2540 00:14:16.271 [2024-11-28 21:21:39.953143] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:16.271 [2024-11-28 21:21:39.953157] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:16.271 [2024-11-28 21:21:39.953182] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.953187] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.953206] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c2540) 00:14:16.271 [2024-11-28 21:21:39.953215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.271 [2024-11-28 21:21:39.953240] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb7a0, cid 4, qid 0 00:14:16.271 [2024-11-28 21:21:39.953355] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:16.271 [2024-11-28 21:21:39.953362] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:16.271 [2024-11-28 21:21:39.953366] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.953369] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c2540): datao=0, datal=4096, cccid=4 00:14:16.271 [2024-11-28 21:21:39.953374] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18fb7a0) on tqpair(0x18c2540): expected_datao=0, payload_size=4096 00:14:16.271 [2024-11-28 21:21:39.953382] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:16.271 [2024-11-28 21:21:39.953386] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:16.271 ===================================================== 00:14:16.271 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:16.271 ===================================================== 00:14:16.271 Controller Capabilities/Features 00:14:16.271 ================================ 00:14:16.271 Vendor ID: 8086 00:14:16.271 Subsystem Vendor ID: 8086 00:14:16.271 Serial Number: SPDK00000000000001 00:14:16.271 Model Number: SPDK bdev Controller 00:14:16.271 Firmware Version: 24.01.1 00:14:16.271 Recommended Arb Burst: 6 00:14:16.271 IEEE OUI Identifier: e4 d2 5c 00:14:16.271 Multi-path I/O 00:14:16.271 May have multiple subsystem ports: Yes 00:14:16.271 May have multiple controllers: Yes 00:14:16.271 Associated with SR-IOV VF: No 00:14:16.271 Max Data Transfer Size: 131072 00:14:16.271 Max Number of Namespaces: 32 00:14:16.271 Max Number of I/O Queues: 127 00:14:16.271 NVMe Specification Version (VS): 1.3 00:14:16.271 NVMe Specification Version (Identify): 1.3 00:14:16.271 Maximum Queue Entries: 128 00:14:16.271 Contiguous Queues Required: Yes 00:14:16.271 Arbitration Mechanisms Supported 00:14:16.271 Weighted Round Robin: Not Supported 00:14:16.271 Vendor Specific: Not Supported 00:14:16.271 Reset Timeout: 15000 ms 00:14:16.271 Doorbell Stride: 4 bytes 00:14:16.271 NVM Subsystem Reset: Not Supported 00:14:16.271 Command Sets Supported 00:14:16.271 NVM Command Set: Supported 00:14:16.271 Boot Partition: Not Supported 00:14:16.271 Memory Page Size Minimum: 4096 bytes 00:14:16.271 Memory Page Size Maximum: 4096 bytes 00:14:16.271 Persistent Memory Region: Not Supported 00:14:16.271 Optional Asynchronous Events Supported 00:14:16.271 Namespace Attribute Notices: Supported 00:14:16.271 Firmware Activation Notices: Not Supported 00:14:16.271 ANA Change Notices: Not Supported 00:14:16.271 PLE Aggregate Log Change Notices: Not Supported 00:14:16.271 LBA Status Info Alert Notices: Not Supported 00:14:16.271 EGE Aggregate Log Change Notices: Not Supported 00:14:16.271 Normal NVM Subsystem Shutdown event: Not Supported 00:14:16.271 Zone Descriptor Change Notices: Not Supported 00:14:16.271 Discovery Log 
Change Notices: Not Supported 00:14:16.271 Controller Attributes 00:14:16.271 128-bit Host Identifier: Supported 00:14:16.271 Non-Operational Permissive Mode: Not Supported 00:14:16.271 NVM Sets: Not Supported 00:14:16.271 Read Recovery Levels: Not Supported 00:14:16.271 Endurance Groups: Not Supported 00:14:16.271 Predictable Latency Mode: Not Supported 00:14:16.271 Traffic Based Keep ALive: Not Supported 00:14:16.271 Namespace Granularity: Not Supported 00:14:16.271 SQ Associations: Not Supported 00:14:16.271 UUID List: Not Supported 00:14:16.271 Multi-Domain Subsystem: Not Supported 00:14:16.271 Fixed Capacity Management: Not Supported 00:14:16.271 Variable Capacity Management: Not Supported 00:14:16.271 Delete Endurance Group: Not Supported 00:14:16.271 Delete NVM Set: Not Supported 00:14:16.271 Extended LBA Formats Supported: Not Supported 00:14:16.271 Flexible Data Placement Supported: Not Supported 00:14:16.271 00:14:16.271 Controller Memory Buffer Support 00:14:16.271 ================================ 00:14:16.271 Supported: No 00:14:16.271 00:14:16.271 Persistent Memory Region Support 00:14:16.271 ================================ 00:14:16.271 Supported: No 00:14:16.271 00:14:16.271 Admin Command Set Attributes 00:14:16.271 ============================ 00:14:16.271 Security Send/Receive: Not Supported 00:14:16.271 Format NVM: Not Supported 00:14:16.271 Firmware Activate/Download: Not Supported 00:14:16.271 Namespace Management: Not Supported 00:14:16.271 Device Self-Test: Not Supported 00:14:16.271 Directives: Not Supported 00:14:16.271 NVMe-MI: Not Supported 00:14:16.271 Virtualization Management: Not Supported 00:14:16.271 Doorbell Buffer Config: Not Supported 00:14:16.271 Get LBA Status Capability: Not Supported 00:14:16.271 Command & Feature Lockdown Capability: Not Supported 00:14:16.271 Abort Command Limit: 4 00:14:16.271 Async Event Request Limit: 4 00:14:16.272 Number of Firmware Slots: N/A 00:14:16.272 Firmware Slot 1 Read-Only: N/A 00:14:16.272 Firmware Activation Without Reset: N/A 00:14:16.272 Multiple Update Detection Support: N/A 00:14:16.272 Firmware Update Granularity: No Information Provided 00:14:16.272 Per-Namespace SMART Log: No 00:14:16.272 Asymmetric Namespace Access Log Page: Not Supported 00:14:16.272 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:16.272 Command Effects Log Page: Supported 00:14:16.272 Get Log Page Extended Data: Supported 00:14:16.272 Telemetry Log Pages: Not Supported 00:14:16.272 Persistent Event Log Pages: Not Supported 00:14:16.272 Supported Log Pages Log Page: May Support 00:14:16.272 Commands Supported & Effects Log Page: Not Supported 00:14:16.272 Feature Identifiers & Effects Log Page:May Support 00:14:16.272 NVMe-MI Commands & Effects Log Page: May Support 00:14:16.272 Data Area 4 for Telemetry Log: Not Supported 00:14:16.272 Error Log Page Entries Supported: 128 00:14:16.272 Keep Alive: Supported 00:14:16.272 Keep Alive Granularity: 10000 ms 00:14:16.272 00:14:16.272 NVM Command Set Attributes 00:14:16.272 ========================== 00:14:16.272 Submission Queue Entry Size 00:14:16.272 Max: 64 00:14:16.272 Min: 64 00:14:16.272 Completion Queue Entry Size 00:14:16.272 Max: 16 00:14:16.272 Min: 16 00:14:16.272 Number of Namespaces: 32 00:14:16.272 Compare Command: Supported 00:14:16.272 Write Uncorrectable Command: Not Supported 00:14:16.272 Dataset Management Command: Supported 00:14:16.272 Write Zeroes Command: Supported 00:14:16.272 Set Features Save Field: Not Supported 00:14:16.272 Reservations: Supported 00:14:16.272 
Timestamp: Not Supported 00:14:16.272 Copy: Supported 00:14:16.272 Volatile Write Cache: Present 00:14:16.272 Atomic Write Unit (Normal): 1 00:14:16.272 Atomic Write Unit (PFail): 1 00:14:16.272 Atomic Compare & Write Unit: 1 00:14:16.272 Fused Compare & Write: Supported 00:14:16.272 Scatter-Gather List 00:14:16.272 SGL Command Set: Supported 00:14:16.272 SGL Keyed: Supported 00:14:16.272 SGL Bit Bucket Descriptor: Not Supported 00:14:16.272 SGL Metadata Pointer: Not Supported 00:14:16.272 Oversized SGL: Not Supported 00:14:16.272 SGL Metadata Address: Not Supported 00:14:16.272 SGL Offset: Supported 00:14:16.272 Transport SGL Data Block: Not Supported 00:14:16.272 Replay Protected Memory Block: Not Supported 00:14:16.272 00:14:16.272 Firmware Slot Information 00:14:16.272 ========================= 00:14:16.272 Active slot: 1 00:14:16.272 Slot 1 Firmware Revision: 24.01.1 00:14:16.272 00:14:16.272 00:14:16.272 Commands Supported and Effects 00:14:16.272 ============================== 00:14:16.272 Admin Commands 00:14:16.272 -------------- 00:14:16.272 Get Log Page (02h): Supported 00:14:16.272 Identify (06h): Supported 00:14:16.272 Abort (08h): Supported 00:14:16.272 Set Features (09h): Supported 00:14:16.272 Get Features (0Ah): Supported 00:14:16.272 Asynchronous Event Request (0Ch): Supported 00:14:16.272 Keep Alive (18h): Supported 00:14:16.272 I/O Commands 00:14:16.272 ------------ 00:14:16.272 Flush (00h): Supported LBA-Change 00:14:16.272 Write (01h): Supported LBA-Change 00:14:16.272 Read (02h): Supported 00:14:16.272 Compare (05h): Supported 00:14:16.272 Write Zeroes (08h): Supported LBA-Change 00:14:16.272 Dataset Management (09h): Supported LBA-Change 00:14:16.272 Copy (19h): Supported LBA-Change 00:14:16.272 Unknown (79h): Supported LBA-Change 00:14:16.272 Unknown (7Ah): Supported 00:14:16.272 00:14:16.272 Error Log 00:14:16.272 ========= 00:14:16.272 00:14:16.272 Arbitration 00:14:16.272 =========== 00:14:16.272 Arbitration Burst: 1 00:14:16.272 00:14:16.272 Power Management 00:14:16.272 ================ 00:14:16.272 Number of Power States: 1 00:14:16.272 Current Power State: Power State #0 00:14:16.272 Power State #0: 00:14:16.272 Max Power: 0.00 W 00:14:16.272 Non-Operational State: Operational 00:14:16.272 Entry Latency: Not Reported 00:14:16.272 Exit Latency: Not Reported 00:14:16.272 Relative Read Throughput: 0 00:14:16.272 Relative Read Latency: 0 00:14:16.272 Relative Write Throughput: 0 00:14:16.272 Relative Write Latency: 0 00:14:16.272 Idle Power: Not Reported 00:14:16.272 Active Power: Not Reported 00:14:16.272 Non-Operational Permissive Mode: Not Supported 00:14:16.272 00:14:16.272 Health Information 00:14:16.272 ================== 00:14:16.272 Critical Warnings: 00:14:16.272 Available Spare Space: OK 00:14:16.272 Temperature: OK 00:14:16.272 Device Reliability: OK 00:14:16.272 Read Only: No 00:14:16.272 Volatile Memory Backup: OK 00:14:16.272 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:16.272 Temperature Threshold: [2024-11-28 21:21:39.953937] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.272 [2024-11-28 21:21:39.953944] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.272 [2024-11-28 21:21:39.953948] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.272 [2024-11-28 21:21:39.953953] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb7a0) on tqpair=0x18c2540 00:14:16.272 [2024-11-28 21:21:39.953963] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:16.272 [2024-11-28 21:21:39.953973] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:16.272 [2024-11-28 21:21:39.953984] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:16.272 [2024-11-28 21:21:39.953991] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:16.272 [2024-11-28 21:21:39.953997] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:16.272 [2024-11-28 21:21:39.954003] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:16.272 [2024-11-28 21:21:39.954008] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:16.272 [2024-11-28 21:21:39.954015] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:16.272 [2024-11-28 21:21:39.954031] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.272 [2024-11-28 21:21:39.954036] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.272 [2024-11-28 21:21:39.954040] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c2540) 00:14:16.272 [2024-11-28 21:21:39.954049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.272 [2024-11-28 21:21:39.954056] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.272 [2024-11-28 21:21:39.954095] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.272 [2024-11-28 21:21:39.954100] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18c2540) 00:14:16.272 [2024-11-28 21:21:39.954108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.272 [2024-11-28 21:21:39.954137] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb7a0, cid 4, qid 0 00:14:16.272 [2024-11-28 21:21:39.954146] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb900, cid 5, qid 0 00:14:16.272 [2024-11-28 21:21:39.954247] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.272 [2024-11-28 21:21:39.954254] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.272 [2024-11-28 21:21:39.954258] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.272 [2024-11-28 21:21:39.954262] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb7a0) on tqpair=0x18c2540 00:14:16.272 [2024-11-28 21:21:39.954270] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.272 [2024-11-28 21:21:39.954276] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.272 [2024-11-28 21:21:39.954279] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.272 [2024-11-28 21:21:39.954283] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb900) on 
tqpair=0x18c2540 00:14:16.272 [2024-11-28 21:21:39.954294] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.272 [2024-11-28 21:21:39.954299] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.272 [2024-11-28 21:21:39.954303] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18c2540) 00:14:16.272 [2024-11-28 21:21:39.954310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.272 [2024-11-28 21:21:39.954330] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb900, cid 5, qid 0 00:14:16.272 [2024-11-28 21:21:39.954382] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.272 [2024-11-28 21:21:39.954388] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.272 [2024-11-28 21:21:39.954392] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.272 [2024-11-28 21:21:39.954396] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb900) on tqpair=0x18c2540 00:14:16.272 [2024-11-28 21:21:39.954407] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.272 [2024-11-28 21:21:39.954412] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.272 [2024-11-28 21:21:39.954416] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18c2540) 00:14:16.272 [2024-11-28 21:21:39.954423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.272 [2024-11-28 21:21:39.954440] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb900, cid 5, qid 0 00:14:16.272 [2024-11-28 21:21:39.954487] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.272 [2024-11-28 21:21:39.954494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.272 [2024-11-28 21:21:39.954497] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954501] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb900) on tqpair=0x18c2540 00:14:16.273 [2024-11-28 21:21:39.954513] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954517] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954521] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18c2540) 00:14:16.273 [2024-11-28 21:21:39.954528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.273 [2024-11-28 21:21:39.954546] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb900, cid 5, qid 0 00:14:16.273 [2024-11-28 21:21:39.954596] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.273 [2024-11-28 21:21:39.954602] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.273 [2024-11-28 21:21:39.954606] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954610] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb900) on tqpair=0x18c2540 00:14:16.273 [2024-11-28 21:21:39.954625] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:14:16.273 [2024-11-28 21:21:39.954630] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954634] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18c2540) 00:14:16.273 [2024-11-28 21:21:39.954641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.273 [2024-11-28 21:21:39.954648] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954653] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954656] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c2540) 00:14:16.273 [2024-11-28 21:21:39.954663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.273 [2024-11-28 21:21:39.954670] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954674] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954678] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x18c2540) 00:14:16.273 [2024-11-28 21:21:39.954684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.273 [2024-11-28 21:21:39.954693] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954698] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954701] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18c2540) 00:14:16.273 [2024-11-28 21:21:39.954708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.273 [2024-11-28 21:21:39.954728] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb900, cid 5, qid 0 00:14:16.273 [2024-11-28 21:21:39.954735] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb7a0, cid 4, qid 0 00:14:16.273 [2024-11-28 21:21:39.954740] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fba60, cid 6, qid 0 00:14:16.273 [2024-11-28 21:21:39.954745] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fbbc0, cid 7, qid 0 00:14:16.273 [2024-11-28 21:21:39.954909] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:16.273 [2024-11-28 21:21:39.954917] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:16.273 [2024-11-28 21:21:39.954921] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954925] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c2540): datao=0, datal=8192, cccid=5 00:14:16.273 [2024-11-28 21:21:39.954930] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18fb900) on tqpair(0x18c2540): expected_datao=0, payload_size=8192 00:14:16.273 [2024-11-28 21:21:39.954948] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954953] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
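The capsule commands traced in this stretch are the admin-queue reads behind the identify report printed earlier: GET FEATURES for arbitration, power management, temperature threshold and number of queues, followed by GET LOG PAGE for the error (01h), SMART (02h), firmware-slot (03h) and commands-supported-and-effects (05h) pages. As a minimal sketch, the same reads could be issued by hand with nvme-cli once the subsystem is connected from the initiator; nvme-cli itself and the /dev/nvme0 node it would enumerate are assumptions for illustration, not part of this run.

# Assumption: nvme-cli is installed on the initiator and the controller shows up as /dev/nvme0.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme get-feature /dev/nvme0 -f 0x01   # Arbitration
nvme get-feature /dev/nvme0 -f 0x02   # Power Management
nvme get-feature /dev/nvme0 -f 0x04   # Temperature Threshold
nvme get-feature /dev/nvme0 -f 0x07   # Number of Queues
nvme error-log   /dev/nvme0           # log page 01h
nvme smart-log   /dev/nvme0           # log page 02h
nvme fw-log      /dev/nvme0           # log page 03h
nvme effects-log /dev/nvme0           # log page 05h
nvme disconnect -n nqn.2016-06.io.spdk:cnode1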
00:14:16.273 [2024-11-28 21:21:39.954960] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:16.273 [2024-11-28 21:21:39.954966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:16.273 [2024-11-28 21:21:39.954970] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954974] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c2540): datao=0, datal=512, cccid=4 00:14:16.273 [2024-11-28 21:21:39.954979] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18fb7a0) on tqpair(0x18c2540): expected_datao=0, payload_size=512 00:14:16.273 [2024-11-28 21:21:39.954987] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954991] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.954997] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:16.273 [2024-11-28 21:21:39.955003] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:16.273 [2024-11-28 21:21:39.955007] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955011] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c2540): datao=0, datal=512, cccid=6 00:14:16.273 [2024-11-28 21:21:39.955016] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18fba60) on tqpair(0x18c2540): expected_datao=0, payload_size=512 00:14:16.273 [2024-11-28 21:21:39.955024] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955028] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955034] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:16.273 [2024-11-28 21:21:39.955040] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:16.273 [2024-11-28 21:21:39.955059] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955065] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c2540): datao=0, datal=4096, cccid=7 00:14:16.273 [2024-11-28 21:21:39.955070] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18fbbc0) on tqpair(0x18c2540): expected_datao=0, payload_size=4096 00:14:16.273 [2024-11-28 21:21:39.955078] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955083] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955093] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.273 [2024-11-28 21:21:39.955099] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.273 [2024-11-28 21:21:39.955103] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955107] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb900) on tqpair=0x18c2540 00:14:16.273 [2024-11-28 21:21:39.955125] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.273 [2024-11-28 21:21:39.955133] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.273 [2024-11-28 21:21:39.955137] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955141] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb7a0) on tqpair=0x18c2540 00:14:16.273 [2024-11-28 
21:21:39.955163] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.273 [2024-11-28 21:21:39.955171] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.273 [2024-11-28 21:21:39.955175] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955179] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fba60) on tqpair=0x18c2540 00:14:16.273 [2024-11-28 21:21:39.955188] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.273 [2024-11-28 21:21:39.955194] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.273 [2024-11-28 21:21:39.955198] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955202] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fbbc0) on tqpair=0x18c2540 00:14:16.273 [2024-11-28 21:21:39.955321] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955329] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955333] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18c2540) 00:14:16.273 [2024-11-28 21:21:39.955342] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.273 [2024-11-28 21:21:39.955367] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fbbc0, cid 7, qid 0 00:14:16.273 [2024-11-28 21:21:39.955426] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.273 [2024-11-28 21:21:39.955434] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.273 [2024-11-28 21:21:39.955438] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955442] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fbbc0) on tqpair=0x18c2540 00:14:16.273 [2024-11-28 21:21:39.955494] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:16.273 [2024-11-28 21:21:39.955508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.273 [2024-11-28 21:21:39.955516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.273 [2024-11-28 21:21:39.955537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.273 [2024-11-28 21:21:39.955543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.273 [2024-11-28 21:21:39.955552] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955557] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955561] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c2540) 00:14:16.273 [2024-11-28 21:21:39.955568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.273 [2024-11-28 21:21:39.955590] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb640, cid 3, qid 0 00:14:16.273 [2024-11-28 
21:21:39.955640] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.273 [2024-11-28 21:21:39.955647] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.273 [2024-11-28 21:21:39.955650] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955654] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb640) on tqpair=0x18c2540 00:14:16.273 [2024-11-28 21:21:39.955663] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955668] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.273 [2024-11-28 21:21:39.955671] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c2540) 00:14:16.273 [2024-11-28 21:21:39.955678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.273 [2024-11-28 21:21:39.955700] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb640, cid 3, qid 0 00:14:16.273 [2024-11-28 21:21:39.955770] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.273 [2024-11-28 21:21:39.955776] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.273 [2024-11-28 21:21:39.955780] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.274 [2024-11-28 21:21:39.955784] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb640) on tqpair=0x18c2540 00:14:16.274 [2024-11-28 21:21:39.955790] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:16.274 [2024-11-28 21:21:39.955795] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:16.274 [2024-11-28 21:21:39.955823] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.274 [2024-11-28 21:21:39.955828] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.274 [2024-11-28 21:21:39.955832] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c2540) 00:14:16.274 [2024-11-28 21:21:39.955840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.274 [2024-11-28 21:21:39.955858] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb640, cid 3, qid 0 00:14:16.274 [2024-11-28 21:21:39.955911] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.274 [2024-11-28 21:21:39.955918] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.274 [2024-11-28 21:21:39.955922] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.274 [2024-11-28 21:21:39.955927] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb640) on tqpair=0x18c2540 00:14:16.274 [2024-11-28 21:21:39.955939] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.274 [2024-11-28 21:21:39.955944] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.274 [2024-11-28 21:21:39.955948] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c2540) 00:14:16.274 [2024-11-28 21:21:39.955956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.274 [2024-11-28 21:21:39.955973] 
nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb640, cid 3, qid 0 00:14:16.274 [2024-11-28 21:21:39.956018] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.274 [2024-11-28 21:21:39.956025] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.274 [2024-11-28 21:21:39.956029] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.274 [2024-11-28 21:21:39.956033] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb640) on tqpair=0x18c2540 00:14:16.274 [2024-11-28 21:21:39.956057] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.274 [2024-11-28 21:21:39.956064] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.274 [2024-11-28 21:21:39.956068] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c2540) 00:14:16.274 [2024-11-28 21:21:39.956077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.274 [2024-11-28 21:21:39.956098] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb640, cid 3, qid 0 00:14:16.274 [2024-11-28 21:21:39.956977] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.274 [2024-11-28 21:21:39.961074] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.274 [2024-11-28 21:21:39.961092] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.274 [2024-11-28 21:21:39.961098] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb640) on tqpair=0x18c2540 00:14:16.274 [2024-11-28 21:21:39.961117] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:16.274 [2024-11-28 21:21:39.961138] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:16.274 [2024-11-28 21:21:39.961142] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c2540) 00:14:16.274 [2024-11-28 21:21:39.961167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:16.274 [2024-11-28 21:21:39.961210] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18fb640, cid 3, qid 0 00:14:16.274 [2024-11-28 21:21:39.961273] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:16.274 [2024-11-28 21:21:39.961280] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:16.274 [2024-11-28 21:21:39.961284] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:16.274 [2024-11-28 21:21:39.961287] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18fb640) on tqpair=0x18c2540 00:14:16.274 [2024-11-28 21:21:39.961297] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:14:16.274 0 Kelvin (-273 Celsius) 00:14:16.274 Available Spare: 0% 00:14:16.274 Available Spare Threshold: 0% 00:14:16.274 Life Percentage Used: 0% 00:14:16.274 Data Units Read: 0 00:14:16.274 Data Units Written: 0 00:14:16.274 Host Read Commands: 0 00:14:16.274 Host Write Commands: 0 00:14:16.274 Controller Busy Time: 0 minutes 00:14:16.274 Power Cycles: 0 00:14:16.274 Power On Hours: 0 hours 00:14:16.274 Unsafe Shutdowns: 0 00:14:16.274 Unrecoverable Media Errors: 0 00:14:16.274 Lifetime Error Log Entries: 0 00:14:16.274 Warning Temperature Time: 0 minutes 00:14:16.274 Critical 
Temperature Time: 0 minutes 00:14:16.274 00:14:16.274 Number of Queues 00:14:16.274 ================ 00:14:16.274 Number of I/O Submission Queues: 127 00:14:16.274 Number of I/O Completion Queues: 127 00:14:16.274 00:14:16.274 Active Namespaces 00:14:16.274 ================= 00:14:16.274 Namespace ID:1 00:14:16.274 Error Recovery Timeout: Unlimited 00:14:16.274 Command Set Identifier: NVM (00h) 00:14:16.274 Deallocate: Supported 00:14:16.274 Deallocated/Unwritten Error: Not Supported 00:14:16.274 Deallocated Read Value: Unknown 00:14:16.274 Deallocate in Write Zeroes: Not Supported 00:14:16.274 Deallocated Guard Field: 0xFFFF 00:14:16.274 Flush: Supported 00:14:16.274 Reservation: Supported 00:14:16.274 Namespace Sharing Capabilities: Multiple Controllers 00:14:16.274 Size (in LBAs): 131072 (0GiB) 00:14:16.274 Capacity (in LBAs): 131072 (0GiB) 00:14:16.274 Utilization (in LBAs): 131072 (0GiB) 00:14:16.274 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:16.274 EUI64: ABCDEF0123456789 00:14:16.274 UUID: 375d4c37-8506-403e-8464-5ffa481fb3c7 00:14:16.274 Thin Provisioning: Not Supported 00:14:16.274 Per-NS Atomic Units: Yes 00:14:16.274 Atomic Boundary Size (Normal): 0 00:14:16.274 Atomic Boundary Size (PFail): 0 00:14:16.274 Atomic Boundary Offset: 0 00:14:16.274 Maximum Single Source Range Length: 65535 00:14:16.274 Maximum Copy Length: 65535 00:14:16.274 Maximum Source Range Count: 1 00:14:16.274 NGUID/EUI64 Never Reused: No 00:14:16.274 Namespace Write Protected: No 00:14:16.274 Number of LBA Formats: 1 00:14:16.274 Current LBA Format: LBA Format #00 00:14:16.274 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:16.274 00:14:16.274 21:21:39 -- host/identify.sh@51 -- # sync 00:14:16.533 21:21:40 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.533 21:21:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.533 21:21:40 -- common/autotest_common.sh@10 -- # set +x 00:14:16.533 21:21:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.533 21:21:40 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:16.533 21:21:40 -- host/identify.sh@56 -- # nvmftestfini 00:14:16.533 21:21:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:16.533 21:21:40 -- nvmf/common.sh@116 -- # sync 00:14:16.533 21:21:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:16.533 21:21:40 -- nvmf/common.sh@119 -- # set +e 00:14:16.533 21:21:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:16.533 21:21:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:16.533 rmmod nvme_tcp 00:14:16.533 rmmod nvme_fabrics 00:14:16.533 rmmod nvme_keyring 00:14:16.533 21:21:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:16.533 21:21:40 -- nvmf/common.sh@123 -- # set -e 00:14:16.533 21:21:40 -- nvmf/common.sh@124 -- # return 0 00:14:16.533 21:21:40 -- nvmf/common.sh@477 -- # '[' -n 79971 ']' 00:14:16.533 21:21:40 -- nvmf/common.sh@478 -- # killprocess 79971 00:14:16.533 21:21:40 -- common/autotest_common.sh@936 -- # '[' -z 79971 ']' 00:14:16.533 21:21:40 -- common/autotest_common.sh@940 -- # kill -0 79971 00:14:16.533 21:21:40 -- common/autotest_common.sh@941 -- # uname 00:14:16.533 21:21:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:16.533 21:21:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79971 00:14:16.533 killing process with pid 79971 00:14:16.533 21:21:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:16.533 21:21:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = 
sudo ']' 00:14:16.533 21:21:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79971' 00:14:16.533 21:21:40 -- common/autotest_common.sh@955 -- # kill 79971 00:14:16.533 [2024-11-28 21:21:40.152053] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:16.533 21:21:40 -- common/autotest_common.sh@960 -- # wait 79971 00:14:16.792 21:21:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:16.792 21:21:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:16.792 21:21:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:16.792 21:21:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:16.792 21:21:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:16.793 21:21:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.793 21:21:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.793 21:21:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.793 21:21:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:16.793 00:14:16.793 real 0m2.499s 00:14:16.793 user 0m7.040s 00:14:16.793 sys 0m0.593s 00:14:16.793 21:21:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:16.793 21:21:40 -- common/autotest_common.sh@10 -- # set +x 00:14:16.793 ************************************ 00:14:16.793 END TEST nvmf_identify 00:14:16.793 ************************************ 00:14:16.793 21:21:40 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:16.793 21:21:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:16.793 21:21:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:16.793 21:21:40 -- common/autotest_common.sh@10 -- # set +x 00:14:16.793 ************************************ 00:14:16.793 START TEST nvmf_perf 00:14:16.793 ************************************ 00:14:16.793 21:21:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:16.793 * Looking for test storage... 
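At this point identify.sh has torn the test down: the subsystem is deleted over RPC, the initiator-side nvme modules are unloaded, and the nvmf_tgt process (pid 79971) is killed and reaped before perf.sh starts. A rough, hand-written equivalent of that nvmftestfini cleanup is sketched below; the $NVMF_PID variable and the explicit namespace removal are assumptions added for illustration, not commands copied from the log.

# Hedged sketch of the teardown sequence logged above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$NVMF_PID" && wait "$NVMF_PID"          # assumption: target pid captured when nvmf_tgt was launched
modprobe -v -r nvme-tcp nvme-fabrics          # unload initiator transport modules (also drops nvme_keyring)
ip -4 addr flush nvmf_init_if                 # drop the 10.0.0.1/24 test address
ip netns delete nvmf_tgt_ns_spdk              # assumption: how _remove_spdk_ns disposes of the target namespace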
00:14:16.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:16.793 21:21:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:16.793 21:21:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:16.793 21:21:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:17.087 21:21:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:17.087 21:21:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:17.087 21:21:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:17.087 21:21:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:17.087 21:21:40 -- scripts/common.sh@335 -- # IFS=.-: 00:14:17.087 21:21:40 -- scripts/common.sh@335 -- # read -ra ver1 00:14:17.087 21:21:40 -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.087 21:21:40 -- scripts/common.sh@336 -- # read -ra ver2 00:14:17.087 21:21:40 -- scripts/common.sh@337 -- # local 'op=<' 00:14:17.087 21:21:40 -- scripts/common.sh@339 -- # ver1_l=2 00:14:17.087 21:21:40 -- scripts/common.sh@340 -- # ver2_l=1 00:14:17.087 21:21:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:17.088 21:21:40 -- scripts/common.sh@343 -- # case "$op" in 00:14:17.088 21:21:40 -- scripts/common.sh@344 -- # : 1 00:14:17.088 21:21:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:17.088 21:21:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:17.088 21:21:40 -- scripts/common.sh@364 -- # decimal 1 00:14:17.088 21:21:40 -- scripts/common.sh@352 -- # local d=1 00:14:17.088 21:21:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.088 21:21:40 -- scripts/common.sh@354 -- # echo 1 00:14:17.088 21:21:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:17.088 21:21:40 -- scripts/common.sh@365 -- # decimal 2 00:14:17.088 21:21:40 -- scripts/common.sh@352 -- # local d=2 00:14:17.088 21:21:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.088 21:21:40 -- scripts/common.sh@354 -- # echo 2 00:14:17.088 21:21:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:17.088 21:21:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:17.088 21:21:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:17.088 21:21:40 -- scripts/common.sh@367 -- # return 0 00:14:17.088 21:21:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.088 21:21:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:17.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.088 --rc genhtml_branch_coverage=1 00:14:17.088 --rc genhtml_function_coverage=1 00:14:17.088 --rc genhtml_legend=1 00:14:17.088 --rc geninfo_all_blocks=1 00:14:17.088 --rc geninfo_unexecuted_blocks=1 00:14:17.088 00:14:17.088 ' 00:14:17.088 21:21:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:17.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.088 --rc genhtml_branch_coverage=1 00:14:17.088 --rc genhtml_function_coverage=1 00:14:17.088 --rc genhtml_legend=1 00:14:17.088 --rc geninfo_all_blocks=1 00:14:17.088 --rc geninfo_unexecuted_blocks=1 00:14:17.088 00:14:17.088 ' 00:14:17.088 21:21:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:17.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.088 --rc genhtml_branch_coverage=1 00:14:17.088 --rc genhtml_function_coverage=1 00:14:17.088 --rc genhtml_legend=1 00:14:17.088 --rc geninfo_all_blocks=1 00:14:17.088 --rc geninfo_unexecuted_blocks=1 00:14:17.088 00:14:17.088 ' 00:14:17.088 
21:21:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:17.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.088 --rc genhtml_branch_coverage=1 00:14:17.088 --rc genhtml_function_coverage=1 00:14:17.088 --rc genhtml_legend=1 00:14:17.088 --rc geninfo_all_blocks=1 00:14:17.088 --rc geninfo_unexecuted_blocks=1 00:14:17.088 00:14:17.088 ' 00:14:17.088 21:21:40 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:17.088 21:21:40 -- nvmf/common.sh@7 -- # uname -s 00:14:17.088 21:21:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.088 21:21:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.088 21:21:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.088 21:21:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.088 21:21:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.088 21:21:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.088 21:21:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.088 21:21:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.088 21:21:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.088 21:21:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.088 21:21:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:14:17.088 21:21:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:14:17.088 21:21:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.088 21:21:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.088 21:21:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:17.088 21:21:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:17.088 21:21:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.088 21:21:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.088 21:21:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.088 21:21:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.088 21:21:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.088 21:21:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.088 21:21:40 -- paths/export.sh@5 -- # export PATH 00:14:17.088 21:21:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.088 21:21:40 -- nvmf/common.sh@46 -- # : 0 00:14:17.088 21:21:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:17.088 21:21:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:17.088 21:21:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:17.088 21:21:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.088 21:21:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.088 21:21:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:17.088 21:21:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:17.088 21:21:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:17.088 21:21:40 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:17.088 21:21:40 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:17.088 21:21:40 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.088 21:21:40 -- host/perf.sh@17 -- # nvmftestinit 00:14:17.088 21:21:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:17.088 21:21:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.088 21:21:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:17.088 21:21:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:17.088 21:21:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:17.088 21:21:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.088 21:21:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.088 21:21:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.088 21:21:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:17.088 21:21:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:17.088 21:21:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:17.088 21:21:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:17.088 21:21:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:17.088 21:21:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:17.088 21:21:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.088 21:21:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.088 21:21:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:17.088 21:21:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:17.088 21:21:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:17.088 21:21:40 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:17.088 21:21:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:17.088 21:21:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.088 21:21:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:17.088 21:21:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:17.088 21:21:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:17.088 21:21:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:17.088 21:21:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:17.088 21:21:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:17.088 Cannot find device "nvmf_tgt_br" 00:14:17.088 21:21:40 -- nvmf/common.sh@154 -- # true 00:14:17.088 21:21:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:17.088 Cannot find device "nvmf_tgt_br2" 00:14:17.088 21:21:40 -- nvmf/common.sh@155 -- # true 00:14:17.088 21:21:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:17.088 21:21:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:17.088 Cannot find device "nvmf_tgt_br" 00:14:17.088 21:21:40 -- nvmf/common.sh@157 -- # true 00:14:17.088 21:21:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:17.088 Cannot find device "nvmf_tgt_br2" 00:14:17.088 21:21:40 -- nvmf/common.sh@158 -- # true 00:14:17.088 21:21:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:17.088 21:21:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:17.088 21:21:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:17.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:17.088 21:21:40 -- nvmf/common.sh@161 -- # true 00:14:17.088 21:21:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:17.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:17.088 21:21:40 -- nvmf/common.sh@162 -- # true 00:14:17.088 21:21:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:17.088 21:21:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:17.088 21:21:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:17.088 21:21:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:17.089 21:21:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:17.089 21:21:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:17.348 21:21:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:17.348 21:21:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:17.348 21:21:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:17.348 21:21:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:17.348 21:21:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:17.348 21:21:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:17.348 21:21:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:17.348 21:21:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:17.348 21:21:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
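For reference, the veth/bridge topology that nvmf_veth_init is assembling here (one initiator veth on the host, two target veths inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge) can be reproduced with the same iproute2 and iptables calls. This is a condensed sketch of the commands logged around this point; only the loop over the link-up calls is added for brevity.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3      # initiator -> target reachability check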
00:14:17.348 21:21:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:17.348 21:21:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:17.348 21:21:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:17.348 21:21:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:17.348 21:21:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:17.348 21:21:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:17.348 21:21:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:17.348 21:21:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:17.348 21:21:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:17.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:14:17.348 00:14:17.348 --- 10.0.0.2 ping statistics --- 00:14:17.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.348 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:17.348 21:21:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:17.348 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:17.348 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:14:17.348 00:14:17.348 --- 10.0.0.3 ping statistics --- 00:14:17.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.348 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:17.348 21:21:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:17.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:17.348 00:14:17.348 --- 10.0.0.1 ping statistics --- 00:14:17.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.348 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:17.348 21:21:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.348 21:21:40 -- nvmf/common.sh@421 -- # return 0 00:14:17.348 21:21:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:17.348 21:21:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.348 21:21:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:17.348 21:21:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:17.348 21:21:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.348 21:21:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:17.348 21:21:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:17.348 21:21:40 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:17.348 21:21:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:17.348 21:21:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:17.348 21:21:40 -- common/autotest_common.sh@10 -- # set +x 00:14:17.348 21:21:40 -- nvmf/common.sh@469 -- # nvmfpid=80191 00:14:17.348 21:21:40 -- nvmf/common.sh@470 -- # waitforlisten 80191 00:14:17.348 21:21:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:17.348 21:21:40 -- common/autotest_common.sh@829 -- # '[' -z 80191 ']' 00:14:17.349 21:21:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.349 21:21:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.349 21:21:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:17.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.349 21:21:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.349 21:21:40 -- common/autotest_common.sh@10 -- # set +x 00:14:17.349 [2024-11-28 21:21:41.029135] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:17.349 [2024-11-28 21:21:41.029225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.608 [2024-11-28 21:21:41.169177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.608 [2024-11-28 21:21:41.201563] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:17.608 [2024-11-28 21:21:41.202307] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.608 [2024-11-28 21:21:41.202344] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.608 [2024-11-28 21:21:41.202359] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.608 [2024-11-28 21:21:41.202540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.608 [2024-11-28 21:21:41.202917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.608 [2024-11-28 21:21:41.203072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.608 [2024-11-28 21:21:41.203283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.544 21:21:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.544 21:21:41 -- common/autotest_common.sh@862 -- # return 0 00:14:18.544 21:21:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:18.544 21:21:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:18.544 21:21:41 -- common/autotest_common.sh@10 -- # set +x 00:14:18.544 21:21:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.544 21:21:42 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:18.544 21:21:42 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:18.803 21:21:42 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:18.803 21:21:42 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:19.061 21:21:42 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:14:19.061 21:21:42 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:19.320 21:21:42 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:19.320 21:21:42 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:14:19.320 21:21:42 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:19.320 21:21:42 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:19.320 21:21:42 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:19.579 [2024-11-28 21:21:43.188399] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.579 21:21:43 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:19.838 21:21:43 -- host/perf.sh@45 -- # for bdev in 
$bdevs 00:14:19.838 21:21:43 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:20.096 21:21:43 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:20.096 21:21:43 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:20.355 21:21:43 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.614 [2024-11-28 21:21:44.197653] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.614 21:21:44 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:20.873 21:21:44 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:14:20.873 21:21:44 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:20.873 21:21:44 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:20.873 21:21:44 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:21.823 Initializing NVMe Controllers 00:14:21.823 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:14:21.823 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:14:21.823 Initialization complete. Launching workers. 00:14:21.823 ======================================================== 00:14:21.823 Latency(us) 00:14:21.823 Device Information : IOPS MiB/s Average min max 00:14:21.823 PCIE (0000:00:06.0) NSID 1 from core 0: 23480.78 91.72 1362.39 353.29 8179.00 00:14:21.823 ======================================================== 00:14:21.823 Total : 23480.78 91.72 1362.39 353.29 8179.00 00:14:21.823 00:14:21.823 21:21:45 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:23.200 Initializing NVMe Controllers 00:14:23.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:23.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:23.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:23.200 Initialization complete. Launching workers. 
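The RPC calls just logged provision the target end to end: a TCP transport is created, subsystem nqn.2016-06.io.spdk:cnode1 is added with the Malloc0 and Nvme0n1 bdevs as namespaces, and data plus discovery listeners are opened on 10.0.0.2:4420 before spdk_nvme_perf is pointed at them. Gathered into one place they look roughly like the sketch below; the $rpc shorthand is introduced here for readability (the log spells out the full path each time), and the -q/-o values shown are those of the first fabrics run, whose latency table follows just below.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc bdev_malloc_create 64 512                                    # -> Malloc0 (64 MiB, 512 B blocks)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # local NVMe attached earlier from 0000:00:06.0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# First fabrics data point: QD 1, 4 KiB, 50/50 random read/write for 1 second.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'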
00:14:23.200 ======================================================== 00:14:23.200 Latency(us) 00:14:23.200 Device Information : IOPS MiB/s Average min max 00:14:23.200 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3497.97 13.66 285.55 103.64 7140.18 00:14:23.200 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8112.35 5083.32 12022.68 00:14:23.200 ======================================================== 00:14:23.200 Total : 3621.97 14.15 553.51 103.64 12022.68 00:14:23.200 00:14:23.201 21:21:46 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:24.602 Initializing NVMe Controllers 00:14:24.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:24.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:24.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:24.602 Initialization complete. Launching workers. 00:14:24.602 ======================================================== 00:14:24.602 Latency(us) 00:14:24.602 Device Information : IOPS MiB/s Average min max 00:14:24.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8902.21 34.77 3594.66 432.52 8579.23 00:14:24.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3966.73 15.50 8108.16 4814.21 14448.87 00:14:24.602 ======================================================== 00:14:24.602 Total : 12868.94 50.27 4985.91 432.52 14448.87 00:14:24.602 00:14:24.602 21:21:48 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:24.603 21:21:48 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:27.143 Initializing NVMe Controllers 00:14:27.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:27.143 Controller IO queue size 128, less than required. 00:14:27.143 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:27.143 Controller IO queue size 128, less than required. 00:14:27.143 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:27.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:27.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:27.143 Initialization complete. Launching workers. 
00:14:27.143 ======================================================== 00:14:27.143 Latency(us) 00:14:27.143 Device Information : IOPS MiB/s Average min max 00:14:27.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1957.82 489.45 66577.50 27664.50 123529.13 00:14:27.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 687.85 171.96 197865.18 109936.72 335019.97 00:14:27.143 ======================================================== 00:14:27.143 Total : 2645.67 661.42 100711.31 27664.50 335019.97 00:14:27.143 00:14:27.143 21:21:50 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:27.403 No valid NVMe controllers or AIO or URING devices found 00:14:27.403 Initializing NVMe Controllers 00:14:27.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:27.403 Controller IO queue size 128, less than required. 00:14:27.403 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:27.403 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:27.403 Controller IO queue size 128, less than required. 00:14:27.403 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:27.403 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:27.403 WARNING: Some requested NVMe devices were skipped 00:14:27.403 21:21:51 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:29.935 Initializing NVMe Controllers 00:14:29.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:29.935 Controller IO queue size 128, less than required. 00:14:29.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:29.935 Controller IO queue size 128, less than required. 00:14:29.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:29.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:29.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:29.935 Initialization complete. Launching workers. 
00:14:29.935 00:14:29.935 ==================== 00:14:29.935 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:29.935 TCP transport: 00:14:29.935 polls: 8317 00:14:29.935 idle_polls: 0 00:14:29.935 sock_completions: 8317 00:14:29.935 nvme_completions: 6490 00:14:29.935 submitted_requests: 9888 00:14:29.935 queued_requests: 1 00:14:29.935 00:14:29.935 ==================== 00:14:29.935 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:29.935 TCP transport: 00:14:29.935 polls: 8435 00:14:29.935 idle_polls: 0 00:14:29.935 sock_completions: 8435 00:14:29.935 nvme_completions: 6657 00:14:29.935 submitted_requests: 10075 00:14:29.935 queued_requests: 1 00:14:29.935 ======================================================== 00:14:29.935 Latency(us) 00:14:29.935 Device Information : IOPS MiB/s Average min max 00:14:29.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1682.37 420.59 77259.44 44545.47 133990.86 00:14:29.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1723.79 430.95 74492.52 39706.67 121621.90 00:14:29.935 ======================================================== 00:14:29.935 Total : 3406.16 851.54 75859.16 39706.67 133990.86 00:14:29.935 00:14:29.935 21:21:53 -- host/perf.sh@66 -- # sync 00:14:29.935 21:21:53 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.194 21:21:53 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:14:30.194 21:21:53 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:14:30.194 21:21:53 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:14:30.453 21:21:54 -- host/perf.sh@72 -- # ls_guid=d3362481-d9a4-4897-af2f-583b154a21e9 00:14:30.453 21:21:54 -- host/perf.sh@73 -- # get_lvs_free_mb d3362481-d9a4-4897-af2f-583b154a21e9 00:14:30.453 21:21:54 -- common/autotest_common.sh@1353 -- # local lvs_uuid=d3362481-d9a4-4897-af2f-583b154a21e9 00:14:30.453 21:21:54 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:30.453 21:21:54 -- common/autotest_common.sh@1355 -- # local fc 00:14:30.453 21:21:54 -- common/autotest_common.sh@1356 -- # local cs 00:14:30.453 21:21:54 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:31.021 21:21:54 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:31.021 { 00:14:31.021 "uuid": "d3362481-d9a4-4897-af2f-583b154a21e9", 00:14:31.021 "name": "lvs_0", 00:14:31.021 "base_bdev": "Nvme0n1", 00:14:31.021 "total_data_clusters": 1278, 00:14:31.021 "free_clusters": 1278, 00:14:31.021 "block_size": 4096, 00:14:31.021 "cluster_size": 4194304 00:14:31.021 } 00:14:31.021 ]' 00:14:31.021 21:21:54 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="d3362481-d9a4-4897-af2f-583b154a21e9") .free_clusters' 00:14:31.021 21:21:54 -- common/autotest_common.sh@1358 -- # fc=1278 00:14:31.021 21:21:54 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="d3362481-d9a4-4897-af2f-583b154a21e9") .cluster_size' 00:14:31.021 5112 00:14:31.021 21:21:54 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:31.021 21:21:54 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:14:31.021 21:21:54 -- common/autotest_common.sh@1363 -- # echo 5112 00:14:31.021 21:21:54 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:14:31.021 21:21:54 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
d3362481-d9a4-4897-af2f-583b154a21e9 lbd_0 5112 00:14:31.291 21:21:54 -- host/perf.sh@80 -- # lb_guid=85a0cc8d-191e-4d1c-8449-a40c5ac368c2 00:14:31.291 21:21:54 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 85a0cc8d-191e-4d1c-8449-a40c5ac368c2 lvs_n_0 00:14:31.561 21:21:55 -- host/perf.sh@83 -- # ls_nested_guid=496c7f28-8b0f-49d6-9d77-80145846d7a9 00:14:31.561 21:21:55 -- host/perf.sh@84 -- # get_lvs_free_mb 496c7f28-8b0f-49d6-9d77-80145846d7a9 00:14:31.561 21:21:55 -- common/autotest_common.sh@1353 -- # local lvs_uuid=496c7f28-8b0f-49d6-9d77-80145846d7a9 00:14:31.561 21:21:55 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:31.561 21:21:55 -- common/autotest_common.sh@1355 -- # local fc 00:14:31.561 21:21:55 -- common/autotest_common.sh@1356 -- # local cs 00:14:31.561 21:21:55 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:31.818 21:21:55 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:31.818 { 00:14:31.818 "uuid": "d3362481-d9a4-4897-af2f-583b154a21e9", 00:14:31.818 "name": "lvs_0", 00:14:31.818 "base_bdev": "Nvme0n1", 00:14:31.818 "total_data_clusters": 1278, 00:14:31.818 "free_clusters": 0, 00:14:31.818 "block_size": 4096, 00:14:31.818 "cluster_size": 4194304 00:14:31.818 }, 00:14:31.818 { 00:14:31.818 "uuid": "496c7f28-8b0f-49d6-9d77-80145846d7a9", 00:14:31.818 "name": "lvs_n_0", 00:14:31.818 "base_bdev": "85a0cc8d-191e-4d1c-8449-a40c5ac368c2", 00:14:31.818 "total_data_clusters": 1276, 00:14:31.818 "free_clusters": 1276, 00:14:31.818 "block_size": 4096, 00:14:31.818 "cluster_size": 4194304 00:14:31.818 } 00:14:31.818 ]' 00:14:31.818 21:21:55 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="496c7f28-8b0f-49d6-9d77-80145846d7a9") .free_clusters' 00:14:32.077 21:21:55 -- common/autotest_common.sh@1358 -- # fc=1276 00:14:32.077 21:21:55 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="496c7f28-8b0f-49d6-9d77-80145846d7a9") .cluster_size' 00:14:32.077 5104 00:14:32.077 21:21:55 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:32.077 21:21:55 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:14:32.077 21:21:55 -- common/autotest_common.sh@1363 -- # echo 5104 00:14:32.077 21:21:55 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:14:32.077 21:21:55 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 496c7f28-8b0f-49d6-9d77-80145846d7a9 lbd_nest_0 5104 00:14:32.336 21:21:55 -- host/perf.sh@88 -- # lb_nested_guid=43aabfdd-4e99-4254-8a33-8eb637e3e74a 00:14:32.336 21:21:55 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:32.594 21:21:56 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:14:32.594 21:21:56 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 43aabfdd-4e99-4254-8a33-8eb637e3e74a 00:14:32.853 21:21:56 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.112 21:21:56 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:14:33.112 21:21:56 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:14:33.112 21:21:56 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:33.112 21:21:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:33.112 21:21:56 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:33.370 No valid NVMe controllers or AIO or URING devices found 00:14:33.370 Initializing NVMe Controllers 00:14:33.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:33.370 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:33.370 WARNING: Some requested NVMe devices were skipped 00:14:33.370 21:21:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:33.370 21:21:56 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:45.599 Initializing NVMe Controllers 00:14:45.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:45.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:45.599 Initialization complete. Launching workers. 00:14:45.599 ======================================================== 00:14:45.599 Latency(us) 00:14:45.599 Device Information : IOPS MiB/s Average min max 00:14:45.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 932.30 116.54 1072.25 332.65 8469.33 00:14:45.599 ======================================================== 00:14:45.599 Total : 932.30 116.54 1072.25 332.65 8469.33 00:14:45.599 00:14:45.599 21:22:07 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:45.599 21:22:07 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:45.599 21:22:07 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:45.599 No valid NVMe controllers or AIO or URING devices found 00:14:45.599 Initializing NVMe Controllers 00:14:45.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:45.599 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:45.599 WARNING: Some requested NVMe devices were skipped 00:14:45.599 21:22:07 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:45.599 21:22:07 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:55.580 Initializing NVMe Controllers 00:14:55.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:55.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:55.580 Initialization complete. Launching workers. 
00:14:55.580 ======================================================== 00:14:55.580 Latency(us) 00:14:55.580 Device Information : IOPS MiB/s Average min max 00:14:55.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1353.59 169.20 23656.60 4514.03 67235.06 00:14:55.580 ======================================================== 00:14:55.580 Total : 1353.59 169.20 23656.60 4514.03 67235.06 00:14:55.580 00:14:55.580 21:22:17 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:55.580 21:22:17 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:55.580 21:22:17 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:55.580 No valid NVMe controllers or AIO or URING devices found 00:14:55.580 Initializing NVMe Controllers 00:14:55.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:55.580 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:55.580 WARNING: Some requested NVMe devices were skipped 00:14:55.580 21:22:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:55.580 21:22:18 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:05.555 Initializing NVMe Controllers 00:15:05.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:05.555 Controller IO queue size 128, less than required. 00:15:05.555 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:05.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:05.555 Initialization complete. Launching workers. 
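The interleaved runs above and below are produced by host/perf.sh sweeping a queue-depth/IO-size matrix against the TCP listener. A minimal standalone sketch of that loop, with the array values and flags taken from the trace (the binary path matches this workspace and may differ elsewhere):

    #!/usr/bin/env bash
    # Sweep queue depth x IO size against the NVMe-oF/TCP subsystem at 10.0.0.2:4420,
    # 50/50 random read/write for 10 seconds per combination.
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
        for o in "${io_size[@]}"; do
            "$PERF" -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
                -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
        done
    done

The 512-byte passes abort with the "invalid ns size ... for I/O size 512" warning seen above, so only the 131072-byte combinations report a latency table.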
00:15:05.555 ======================================================== 00:15:05.555 Latency(us) 00:15:05.555 Device Information : IOPS MiB/s Average min max 00:15:05.555 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3993.59 499.20 32117.84 10619.42 62678.26 00:15:05.555 ======================================================== 00:15:05.555 Total : 3993.59 499.20 32117.84 10619.42 62678.26 00:15:05.555 00:15:05.555 21:22:28 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.556 21:22:28 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 43aabfdd-4e99-4254-8a33-8eb637e3e74a 00:15:05.556 21:22:29 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:05.814 21:22:29 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 85a0cc8d-191e-4d1c-8449-a40c5ac368c2 00:15:06.074 21:22:29 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:06.333 21:22:29 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:06.333 21:22:29 -- host/perf.sh@114 -- # nvmftestfini 00:15:06.333 21:22:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:06.333 21:22:29 -- nvmf/common.sh@116 -- # sync 00:15:06.333 21:22:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:06.333 21:22:29 -- nvmf/common.sh@119 -- # set +e 00:15:06.333 21:22:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:06.333 21:22:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:06.333 rmmod nvme_tcp 00:15:06.333 rmmod nvme_fabrics 00:15:06.333 rmmod nvme_keyring 00:15:06.333 21:22:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:06.333 21:22:29 -- nvmf/common.sh@123 -- # set -e 00:15:06.333 21:22:29 -- nvmf/common.sh@124 -- # return 0 00:15:06.333 21:22:29 -- nvmf/common.sh@477 -- # '[' -n 80191 ']' 00:15:06.333 21:22:29 -- nvmf/common.sh@478 -- # killprocess 80191 00:15:06.333 21:22:29 -- common/autotest_common.sh@936 -- # '[' -z 80191 ']' 00:15:06.333 21:22:29 -- common/autotest_common.sh@940 -- # kill -0 80191 00:15:06.333 21:22:29 -- common/autotest_common.sh@941 -- # uname 00:15:06.333 21:22:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:06.333 21:22:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80191 00:15:06.333 killing process with pid 80191 00:15:06.333 21:22:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:06.333 21:22:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:06.333 21:22:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80191' 00:15:06.333 21:22:30 -- common/autotest_common.sh@955 -- # kill 80191 00:15:06.333 21:22:30 -- common/autotest_common.sh@960 -- # wait 80191 00:15:06.592 21:22:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:06.592 21:22:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:06.592 21:22:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:06.592 21:22:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.592 21:22:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:06.592 21:22:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.592 21:22:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.592 21:22:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.592 21:22:30 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:15:06.592 00:15:06.592 real 0m49.800s 00:15:06.592 user 3m7.214s 00:15:06.592 sys 0m13.034s 00:15:06.592 21:22:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:06.592 21:22:30 -- common/autotest_common.sh@10 -- # set +x 00:15:06.592 ************************************ 00:15:06.592 END TEST nvmf_perf 00:15:06.592 ************************************ 00:15:06.592 21:22:30 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:06.592 21:22:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:06.592 21:22:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:06.592 21:22:30 -- common/autotest_common.sh@10 -- # set +x 00:15:06.592 ************************************ 00:15:06.592 START TEST nvmf_fio_host 00:15:06.592 ************************************ 00:15:06.592 21:22:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:06.592 * Looking for test storage... 00:15:06.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:06.851 21:22:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:06.851 21:22:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:06.851 21:22:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:06.851 21:22:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:06.851 21:22:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:06.851 21:22:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:06.851 21:22:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:06.851 21:22:30 -- scripts/common.sh@335 -- # IFS=.-: 00:15:06.851 21:22:30 -- scripts/common.sh@335 -- # read -ra ver1 00:15:06.851 21:22:30 -- scripts/common.sh@336 -- # IFS=.-: 00:15:06.851 21:22:30 -- scripts/common.sh@336 -- # read -ra ver2 00:15:06.851 21:22:30 -- scripts/common.sh@337 -- # local 'op=<' 00:15:06.851 21:22:30 -- scripts/common.sh@339 -- # ver1_l=2 00:15:06.851 21:22:30 -- scripts/common.sh@340 -- # ver2_l=1 00:15:06.851 21:22:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:06.851 21:22:30 -- scripts/common.sh@343 -- # case "$op" in 00:15:06.851 21:22:30 -- scripts/common.sh@344 -- # : 1 00:15:06.851 21:22:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:06.851 21:22:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:06.851 21:22:30 -- scripts/common.sh@364 -- # decimal 1 00:15:06.851 21:22:30 -- scripts/common.sh@352 -- # local d=1 00:15:06.851 21:22:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:06.851 21:22:30 -- scripts/common.sh@354 -- # echo 1 00:15:06.851 21:22:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:06.851 21:22:30 -- scripts/common.sh@365 -- # decimal 2 00:15:06.851 21:22:30 -- scripts/common.sh@352 -- # local d=2 00:15:06.851 21:22:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:06.851 21:22:30 -- scripts/common.sh@354 -- # echo 2 00:15:06.851 21:22:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:06.851 21:22:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:06.851 21:22:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:06.851 21:22:30 -- scripts/common.sh@367 -- # return 0 00:15:06.851 21:22:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:06.851 21:22:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:06.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.851 --rc genhtml_branch_coverage=1 00:15:06.851 --rc genhtml_function_coverage=1 00:15:06.851 --rc genhtml_legend=1 00:15:06.851 --rc geninfo_all_blocks=1 00:15:06.851 --rc geninfo_unexecuted_blocks=1 00:15:06.851 00:15:06.851 ' 00:15:06.851 21:22:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:06.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.851 --rc genhtml_branch_coverage=1 00:15:06.851 --rc genhtml_function_coverage=1 00:15:06.852 --rc genhtml_legend=1 00:15:06.852 --rc geninfo_all_blocks=1 00:15:06.852 --rc geninfo_unexecuted_blocks=1 00:15:06.852 00:15:06.852 ' 00:15:06.852 21:22:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:06.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.852 --rc genhtml_branch_coverage=1 00:15:06.852 --rc genhtml_function_coverage=1 00:15:06.852 --rc genhtml_legend=1 00:15:06.852 --rc geninfo_all_blocks=1 00:15:06.852 --rc geninfo_unexecuted_blocks=1 00:15:06.852 00:15:06.852 ' 00:15:06.852 21:22:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:06.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.852 --rc genhtml_branch_coverage=1 00:15:06.852 --rc genhtml_function_coverage=1 00:15:06.852 --rc genhtml_legend=1 00:15:06.852 --rc geninfo_all_blocks=1 00:15:06.852 --rc geninfo_unexecuted_blocks=1 00:15:06.852 00:15:06.852 ' 00:15:06.852 21:22:30 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:06.852 21:22:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.852 21:22:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.852 21:22:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.852 21:22:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.852 21:22:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.852 21:22:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.852 21:22:30 -- paths/export.sh@5 -- # export PATH 00:15:06.852 21:22:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.852 21:22:30 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:06.852 21:22:30 -- nvmf/common.sh@7 -- # uname -s 00:15:06.852 21:22:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.852 21:22:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.852 21:22:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.852 21:22:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.852 21:22:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.852 21:22:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.852 21:22:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.852 21:22:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.852 21:22:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.852 21:22:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.852 21:22:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:15:06.852 21:22:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:15:06.852 21:22:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.852 21:22:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.852 21:22:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:06.852 21:22:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:06.852 21:22:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.852 21:22:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.852 21:22:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.852 21:22:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.852 21:22:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.852 21:22:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.852 21:22:30 -- paths/export.sh@5 -- # export PATH 00:15:06.852 21:22:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.852 21:22:30 -- nvmf/common.sh@46 -- # : 0 00:15:06.852 21:22:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:06.852 21:22:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:06.852 21:22:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:06.852 21:22:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.852 21:22:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.852 21:22:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:06.852 21:22:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:06.852 21:22:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:06.852 21:22:30 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.852 21:22:30 -- host/fio.sh@14 -- # nvmftestinit 00:15:06.852 21:22:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:06.852 21:22:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.852 21:22:30 -- nvmf/common.sh@436 -- # prepare_net_devs 
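The nvmftestinit call traced below builds a veth/network-namespace topology for the TCP tests before any target is started. Condensed into a standalone sketch (namespace, interface and address names taken verbatim from the trace; the second target interface nvmf_tgt_if2/10.0.0.3 is configured the same way and omitted here, as are the pre-cleanup and error paths):

    # Target side lives in its own network namespace; the initiator side stays in
    # the default namespace. The two are joined through the nvmf_br bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # 10.0.0.1 = initiator, 10.0.0.2 = target
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the host-side veth peers so the two ends can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Open NVMe/TCP port 4420 towards the initiator interface and allow bridge forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Connectivity checks mirrored by the pings in the trace
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1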
00:15:06.852 21:22:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:06.852 21:22:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:06.852 21:22:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.852 21:22:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.852 21:22:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.852 21:22:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:06.852 21:22:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:06.852 21:22:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:06.852 21:22:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:06.852 21:22:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:06.852 21:22:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:06.852 21:22:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.853 21:22:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.853 21:22:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:06.853 21:22:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:06.853 21:22:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:06.853 21:22:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:06.853 21:22:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:06.853 21:22:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.853 21:22:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:06.853 21:22:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:06.853 21:22:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:06.853 21:22:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:06.853 21:22:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:06.853 21:22:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:06.853 Cannot find device "nvmf_tgt_br" 00:15:06.853 21:22:30 -- nvmf/common.sh@154 -- # true 00:15:06.853 21:22:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:06.853 Cannot find device "nvmf_tgt_br2" 00:15:06.853 21:22:30 -- nvmf/common.sh@155 -- # true 00:15:06.853 21:22:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:06.853 21:22:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:06.853 Cannot find device "nvmf_tgt_br" 00:15:06.853 21:22:30 -- nvmf/common.sh@157 -- # true 00:15:06.853 21:22:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:06.853 Cannot find device "nvmf_tgt_br2" 00:15:06.853 21:22:30 -- nvmf/common.sh@158 -- # true 00:15:06.853 21:22:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:07.111 21:22:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:07.111 21:22:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:07.111 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:07.111 21:22:30 -- nvmf/common.sh@161 -- # true 00:15:07.111 21:22:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:07.111 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:07.111 21:22:30 -- nvmf/common.sh@162 -- # true 00:15:07.111 21:22:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:07.111 21:22:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:07.111 21:22:30 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:07.112 21:22:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:07.112 21:22:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:07.112 21:22:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:07.112 21:22:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:07.112 21:22:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:07.112 21:22:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:07.112 21:22:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:07.112 21:22:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:07.112 21:22:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:07.112 21:22:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:07.112 21:22:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:07.112 21:22:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:07.112 21:22:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:07.112 21:22:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:07.112 21:22:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:07.112 21:22:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:07.112 21:22:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:07.112 21:22:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:07.112 21:22:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:07.112 21:22:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:07.112 21:22:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:07.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:07.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:15:07.112 00:15:07.112 --- 10.0.0.2 ping statistics --- 00:15:07.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.112 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:07.112 21:22:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:07.112 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:07.112 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:15:07.112 00:15:07.112 --- 10.0.0.3 ping statistics --- 00:15:07.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.112 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:07.112 21:22:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:07.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:07.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:07.112 00:15:07.112 --- 10.0.0.1 ping statistics --- 00:15:07.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.112 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:07.112 21:22:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:07.112 21:22:30 -- nvmf/common.sh@421 -- # return 0 00:15:07.112 21:22:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:07.112 21:22:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:07.112 21:22:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:07.112 21:22:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:07.112 21:22:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:07.112 21:22:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:07.112 21:22:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:07.112 21:22:30 -- host/fio.sh@16 -- # [[ y != y ]] 00:15:07.112 21:22:30 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:07.112 21:22:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:07.112 21:22:30 -- common/autotest_common.sh@10 -- # set +x 00:15:07.112 21:22:30 -- host/fio.sh@24 -- # nvmfpid=81016 00:15:07.112 21:22:30 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:07.112 21:22:30 -- host/fio.sh@28 -- # waitforlisten 81016 00:15:07.112 21:22:30 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:07.112 21:22:30 -- common/autotest_common.sh@829 -- # '[' -z 81016 ']' 00:15:07.112 21:22:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.112 21:22:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.112 21:22:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.112 21:22:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.112 21:22:30 -- common/autotest_common.sh@10 -- # set +x 00:15:07.370 [2024-11-28 21:22:30.879866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:07.370 [2024-11-28 21:22:30.879970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.370 [2024-11-28 21:22:31.011934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:07.370 [2024-11-28 21:22:31.045431] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:07.370 [2024-11-28 21:22:31.045600] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.370 [2024-11-28 21:22:31.045612] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.370 [2024-11-28 21:22:31.045621] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
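Once nvmf_tgt is up inside the namespace and listening on its RPC socket, host/fio.sh provisions a malloc-backed subsystem and then drives it with fio through the SPDK NVMe plugin. Condensed from the trace that follows, as a sketch (commands and arguments as traced; paths match this workspace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with the options nvmftestinit selected (-t tcp -o -u 8192)
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # Malloc bdev (64 MB total, 512-byte blocks) to serve as the namespace
    $RPC bdev_malloc_create 64 512 -b Malloc1
    # Subsystem, namespace and TCP listeners on the target-side address
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # fio drives the subsystem through the SPDK fio plugin over NVMe/TCP
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
            '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The later fio passes repeat the same pattern with an lvol on Nvme0n1 (cnode2) and a nested lvol (cnode3) in place of Malloc1.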
00:15:07.370 [2024-11-28 21:22:31.045809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.370 [2024-11-28 21:22:31.046441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.370 [2024-11-28 21:22:31.046595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:07.370 [2024-11-28 21:22:31.046600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.628 21:22:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.628 21:22:31 -- common/autotest_common.sh@862 -- # return 0 00:15:07.628 21:22:31 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:07.941 [2024-11-28 21:22:31.406076] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.941 21:22:31 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:07.941 21:22:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.941 21:22:31 -- common/autotest_common.sh@10 -- # set +x 00:15:07.941 21:22:31 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:08.227 Malloc1 00:15:08.227 21:22:31 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:08.486 21:22:32 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:08.745 21:22:32 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.003 [2024-11-28 21:22:32.571177] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.003 21:22:32 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:09.262 21:22:32 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:09.262 21:22:32 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:09.262 21:22:32 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:09.262 21:22:32 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:09.262 21:22:32 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:09.262 21:22:32 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:09.262 21:22:32 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:09.262 21:22:32 -- common/autotest_common.sh@1330 -- # shift 00:15:09.262 21:22:32 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:09.262 21:22:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:09.262 21:22:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:09.262 21:22:32 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:09.262 21:22:32 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:09.262 21:22:32 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:09.262 21:22:32 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:09.262 21:22:32 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:09.262 21:22:32 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:09.262 21:22:32 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:09.262 21:22:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:09.262 21:22:32 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:09.262 21:22:32 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:09.262 21:22:32 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:09.262 21:22:32 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:09.521 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:09.521 fio-3.35 00:15:09.521 Starting 1 thread 00:15:12.051 00:15:12.051 test: (groupid=0, jobs=1): err= 0: pid=81087: Thu Nov 28 21:22:35 2024 00:15:12.051 read: IOPS=9311, BW=36.4MiB/s (38.1MB/s)(73.0MiB/2006msec) 00:15:12.051 slat (nsec): min=1935, max=344498, avg=2590.65, stdev=3421.22 00:15:12.051 clat (usec): min=2582, max=12994, avg=7140.80, stdev=573.17 00:15:12.052 lat (usec): min=2623, max=12996, avg=7143.39, stdev=572.98 00:15:12.052 clat percentiles (usec): 00:15:12.052 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6718], 00:15:12.052 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:15:12.052 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7832], 95.00th=[ 8094], 00:15:12.052 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[11600], 99.95th=[12518], 00:15:12.052 | 99.99th=[12780] 00:15:12.052 bw ( KiB/s): min=36312, max=37824, per=99.88%, avg=37199.50, stdev=735.45, samples=4 00:15:12.052 iops : min= 9078, max= 9456, avg=9299.75, stdev=183.94, samples=4 00:15:12.052 write: IOPS=9313, BW=36.4MiB/s (38.1MB/s)(73.0MiB/2006msec); 0 zone resets 00:15:12.052 slat (usec): min=2, max=2929, avg= 2.86, stdev=21.56 00:15:12.052 clat (usec): min=2442, max=12964, avg=6538.03, stdev=518.28 00:15:12.052 lat (usec): min=2455, max=12966, avg=6540.89, stdev=518.18 00:15:12.052 clat percentiles (usec): 00:15:12.052 | 1.00th=[ 5473], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6128], 00:15:12.052 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6587], 00:15:12.052 | 70.00th=[ 6718], 80.00th=[ 6915], 90.00th=[ 7177], 95.00th=[ 7373], 00:15:12.052 | 99.00th=[ 7898], 99.50th=[ 8160], 99.90th=[10028], 99.95th=[10814], 00:15:12.052 | 99.99th=[12649] 00:15:12.052 bw ( KiB/s): min=36790, max=37976, per=99.94%, avg=37231.50, stdev=522.88, samples=4 00:15:12.052 iops : min= 9197, max= 9494, avg=9307.75, stdev=130.86, samples=4 00:15:12.052 lat (msec) : 4=0.08%, 10=99.79%, 20=0.13% 00:15:12.052 cpu : usr=69.13%, sys=21.95%, ctx=17, majf=0, minf=5 00:15:12.052 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:12.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:12.052 issued rwts: total=18678,18683,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.052 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:12.052 00:15:12.052 Run status group 0 (all jobs): 00:15:12.052 READ: bw=36.4MiB/s (38.1MB/s), 36.4MiB/s-36.4MiB/s (38.1MB/s-38.1MB/s), io=73.0MiB (76.5MB), 
run=2006-2006msec 00:15:12.052 WRITE: bw=36.4MiB/s (38.1MB/s), 36.4MiB/s-36.4MiB/s (38.1MB/s-38.1MB/s), io=73.0MiB (76.5MB), run=2006-2006msec 00:15:12.052 21:22:35 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:12.052 21:22:35 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:12.052 21:22:35 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:12.052 21:22:35 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:12.052 21:22:35 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:12.052 21:22:35 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:12.052 21:22:35 -- common/autotest_common.sh@1330 -- # shift 00:15:12.052 21:22:35 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:12.052 21:22:35 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:12.052 21:22:35 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:12.052 21:22:35 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:12.052 21:22:35 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:12.052 21:22:35 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:12.052 21:22:35 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:12.052 21:22:35 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:12.052 21:22:35 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:12.052 21:22:35 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:12.052 21:22:35 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:12.052 21:22:35 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:12.052 21:22:35 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:12.052 21:22:35 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:12.052 21:22:35 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:12.052 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:12.052 fio-3.35 00:15:12.052 Starting 1 thread 00:15:14.578 00:15:14.578 test: (groupid=0, jobs=1): err= 0: pid=81130: Thu Nov 28 21:22:37 2024 00:15:14.578 read: IOPS=8480, BW=133MiB/s (139MB/s)(266MiB/2006msec) 00:15:14.578 slat (usec): min=2, max=135, avg= 4.01, stdev= 2.85 00:15:14.578 clat (usec): min=1624, max=17066, avg=8071.40, stdev=2401.63 00:15:14.578 lat (usec): min=1628, max=17069, avg=8075.41, stdev=2401.80 00:15:14.578 clat percentiles (usec): 00:15:14.578 | 1.00th=[ 4228], 5.00th=[ 4883], 10.00th=[ 5342], 20.00th=[ 5932], 00:15:14.579 | 30.00th=[ 6456], 40.00th=[ 7111], 50.00th=[ 7701], 60.00th=[ 8356], 00:15:14.579 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[11338], 95.00th=[12387], 00:15:14.579 | 99.00th=[15008], 99.50th=[15533], 99.90th=[16581], 99.95th=[16909], 00:15:14.579 | 99.99th=[17171] 00:15:14.579 bw ( KiB/s): min=65056, max=77344, per=51.79%, avg=70268.50, stdev=5305.96, samples=4 00:15:14.579 iops : 
min= 4066, max= 4834, avg=4391.75, stdev=331.62, samples=4 00:15:14.579 write: IOPS=4952, BW=77.4MiB/s (81.1MB/s)(143MiB/1844msec); 0 zone resets 00:15:14.579 slat (usec): min=32, max=363, avg=39.73, stdev= 9.95 00:15:14.579 clat (usec): min=3438, max=20124, avg=12101.19, stdev=1922.21 00:15:14.579 lat (usec): min=3475, max=20162, avg=12140.92, stdev=1923.35 00:15:14.579 clat percentiles (usec): 00:15:14.579 | 1.00th=[ 7832], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10552], 00:15:14.579 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:15:14.579 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14746], 95.00th=[15664], 00:15:14.579 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18744], 99.95th=[19006], 00:15:14.579 | 99.99th=[20055] 00:15:14.579 bw ( KiB/s): min=68544, max=80160, per=92.15%, avg=73019.00, stdev=5235.80, samples=4 00:15:14.579 iops : min= 4284, max= 5010, avg=4563.50, stdev=327.21, samples=4 00:15:14.579 lat (msec) : 2=0.02%, 4=0.34%, 10=55.21%, 20=44.42%, 50=0.01% 00:15:14.579 cpu : usr=77.01%, sys=16.56%, ctx=73, majf=0, minf=1 00:15:14.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:15:14.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:14.579 issued rwts: total=17012,9132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:14.579 00:15:14.579 Run status group 0 (all jobs): 00:15:14.579 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=266MiB (279MB), run=2006-2006msec 00:15:14.579 WRITE: bw=77.4MiB/s (81.1MB/s), 77.4MiB/s-77.4MiB/s (81.1MB/s-81.1MB/s), io=143MiB (150MB), run=1844-1844msec 00:15:14.579 21:22:37 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.579 21:22:38 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:15:14.579 21:22:38 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:15:14.579 21:22:38 -- host/fio.sh@51 -- # get_nvme_bdfs 00:15:14.579 21:22:38 -- common/autotest_common.sh@1508 -- # bdfs=() 00:15:14.579 21:22:38 -- common/autotest_common.sh@1508 -- # local bdfs 00:15:14.579 21:22:38 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:14.579 21:22:38 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:14.579 21:22:38 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:15:14.579 21:22:38 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:15:14.579 21:22:38 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:15:14.579 21:22:38 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:15:14.836 Nvme0n1 00:15:14.836 21:22:38 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:15:15.094 21:22:38 -- host/fio.sh@53 -- # ls_guid=5aa09a11-e0c1-4eb9-b4e3-ef212eedf130 00:15:15.094 21:22:38 -- host/fio.sh@54 -- # get_lvs_free_mb 5aa09a11-e0c1-4eb9-b4e3-ef212eedf130 00:15:15.094 21:22:38 -- common/autotest_common.sh@1353 -- # local lvs_uuid=5aa09a11-e0c1-4eb9-b4e3-ef212eedf130 00:15:15.094 21:22:38 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:15.094 21:22:38 -- common/autotest_common.sh@1355 -- # local fc 00:15:15.094 
21:22:38 -- common/autotest_common.sh@1356 -- # local cs 00:15:15.094 21:22:38 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:15.351 21:22:39 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:15.351 { 00:15:15.351 "uuid": "5aa09a11-e0c1-4eb9-b4e3-ef212eedf130", 00:15:15.351 "name": "lvs_0", 00:15:15.351 "base_bdev": "Nvme0n1", 00:15:15.351 "total_data_clusters": 4, 00:15:15.351 "free_clusters": 4, 00:15:15.351 "block_size": 4096, 00:15:15.351 "cluster_size": 1073741824 00:15:15.351 } 00:15:15.351 ]' 00:15:15.351 21:22:39 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="5aa09a11-e0c1-4eb9-b4e3-ef212eedf130") .free_clusters' 00:15:15.351 21:22:39 -- common/autotest_common.sh@1358 -- # fc=4 00:15:15.351 21:22:39 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="5aa09a11-e0c1-4eb9-b4e3-ef212eedf130") .cluster_size' 00:15:15.608 21:22:39 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:15:15.608 21:22:39 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:15:15.608 4096 00:15:15.608 21:22:39 -- common/autotest_common.sh@1363 -- # echo 4096 00:15:15.608 21:22:39 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:15:15.608 528e310c-3a4d-4d67-8342-65e15ba5119d 00:15:15.866 21:22:39 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:15:15.866 21:22:39 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:15:16.124 21:22:39 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:16.382 21:22:40 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:16.382 21:22:40 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:16.382 21:22:40 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:16.382 21:22:40 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:16.382 21:22:40 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:16.382 21:22:40 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:16.382 21:22:40 -- common/autotest_common.sh@1330 -- # shift 00:15:16.382 21:22:40 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:16.382 21:22:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:16.382 21:22:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:16.382 21:22:40 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:16.382 21:22:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:16.382 21:22:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:16.382 21:22:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:16.382 21:22:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:16.382 21:22:40 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:16.382 21:22:40 -- 
common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:16.382 21:22:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:16.382 21:22:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:16.382 21:22:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:16.382 21:22:40 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:16.382 21:22:40 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:16.639 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:16.639 fio-3.35 00:15:16.639 Starting 1 thread 00:15:19.162 00:15:19.162 test: (groupid=0, jobs=1): err= 0: pid=81239: Thu Nov 28 21:22:42 2024 00:15:19.162 read: IOPS=6547, BW=25.6MiB/s (26.8MB/s)(51.4MiB/2008msec) 00:15:19.162 slat (nsec): min=1984, max=320082, avg=2607.99, stdev=3953.62 00:15:19.162 clat (usec): min=2968, max=17584, avg=10205.78, stdev=851.13 00:15:19.162 lat (usec): min=2977, max=17586, avg=10208.39, stdev=850.86 00:15:19.162 clat percentiles (usec): 00:15:19.162 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:15:19.162 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:15:19.162 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:15:19.162 | 99.00th=[12256], 99.50th=[12518], 99.90th=[15139], 99.95th=[16188], 00:15:19.162 | 99.99th=[17433] 00:15:19.162 bw ( KiB/s): min=25165, max=26752, per=99.83%, avg=26145.25, stdev=752.74, samples=4 00:15:19.162 iops : min= 6291, max= 6688, avg=6536.25, stdev=188.29, samples=4 00:15:19.162 write: IOPS=6555, BW=25.6MiB/s (26.9MB/s)(51.4MiB/2008msec); 0 zone resets 00:15:19.162 slat (usec): min=2, max=226, avg= 2.72, stdev= 2.62 00:15:19.162 clat (usec): min=2458, max=16302, avg=9262.04, stdev=801.22 00:15:19.162 lat (usec): min=2472, max=16304, avg=9264.76, stdev=801.08 00:15:19.162 clat percentiles (usec): 00:15:19.162 | 1.00th=[ 7570], 5.00th=[ 8094], 10.00th=[ 8356], 20.00th=[ 8586], 00:15:19.162 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:15:19.162 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10421], 00:15:19.162 | 99.00th=[11076], 99.50th=[11338], 99.90th=[15008], 99.95th=[15926], 00:15:19.162 | 99.99th=[16319] 00:15:19.162 bw ( KiB/s): min=25984, max=26296, per=99.90%, avg=26196.75, stdev=143.31, samples=4 00:15:19.162 iops : min= 6496, max= 6574, avg=6549.00, stdev=35.72, samples=4 00:15:19.162 lat (msec) : 4=0.06%, 10=62.63%, 20=37.31% 00:15:19.162 cpu : usr=71.65%, sys=21.72%, ctx=8, majf=0, minf=5 00:15:19.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:19.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:19.162 issued rwts: total=13147,13164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:19.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:19.162 00:15:19.162 Run status group 0 (all jobs): 00:15:19.162 READ: bw=25.6MiB/s (26.8MB/s), 25.6MiB/s-25.6MiB/s (26.8MB/s-26.8MB/s), io=51.4MiB (53.8MB), run=2008-2008msec 00:15:19.162 WRITE: bw=25.6MiB/s (26.9MB/s), 25.6MiB/s-25.6MiB/s (26.9MB/s-26.9MB/s), io=51.4MiB (53.9MB), run=2008-2008msec 00:15:19.162 21:22:42 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:19.162 21:22:42 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:15:19.420 21:22:43 -- host/fio.sh@64 -- # ls_nested_guid=9e1a9032-9b4d-4a9a-999e-1460eb1c9afc 00:15:19.420 21:22:43 -- host/fio.sh@65 -- # get_lvs_free_mb 9e1a9032-9b4d-4a9a-999e-1460eb1c9afc 00:15:19.420 21:22:43 -- common/autotest_common.sh@1353 -- # local lvs_uuid=9e1a9032-9b4d-4a9a-999e-1460eb1c9afc 00:15:19.420 21:22:43 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:19.420 21:22:43 -- common/autotest_common.sh@1355 -- # local fc 00:15:19.420 21:22:43 -- common/autotest_common.sh@1356 -- # local cs 00:15:19.420 21:22:43 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:19.678 21:22:43 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:19.678 { 00:15:19.678 "uuid": "5aa09a11-e0c1-4eb9-b4e3-ef212eedf130", 00:15:19.678 "name": "lvs_0", 00:15:19.678 "base_bdev": "Nvme0n1", 00:15:19.678 "total_data_clusters": 4, 00:15:19.678 "free_clusters": 0, 00:15:19.678 "block_size": 4096, 00:15:19.678 "cluster_size": 1073741824 00:15:19.678 }, 00:15:19.678 { 00:15:19.678 "uuid": "9e1a9032-9b4d-4a9a-999e-1460eb1c9afc", 00:15:19.678 "name": "lvs_n_0", 00:15:19.678 "base_bdev": "528e310c-3a4d-4d67-8342-65e15ba5119d", 00:15:19.678 "total_data_clusters": 1022, 00:15:19.678 "free_clusters": 1022, 00:15:19.678 "block_size": 4096, 00:15:19.678 "cluster_size": 4194304 00:15:19.678 } 00:15:19.678 ]' 00:15:19.678 21:22:43 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="9e1a9032-9b4d-4a9a-999e-1460eb1c9afc") .free_clusters' 00:15:19.678 21:22:43 -- common/autotest_common.sh@1358 -- # fc=1022 00:15:19.678 21:22:43 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="9e1a9032-9b4d-4a9a-999e-1460eb1c9afc") .cluster_size' 00:15:19.935 4088 00:15:19.935 21:22:43 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:19.935 21:22:43 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:15:19.935 21:22:43 -- common/autotest_common.sh@1363 -- # echo 4088 00:15:19.936 21:22:43 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:15:19.936 56d61ea2-4111-4a8c-87ba-83ccbd9f57d0 00:15:19.936 21:22:43 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:15:20.193 21:22:43 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:15:20.451 21:22:44 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:20.709 21:22:44 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:20.709 21:22:44 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:20.709 21:22:44 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:20.709 21:22:44 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:20.709 
21:22:44 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:20.709 21:22:44 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:20.709 21:22:44 -- common/autotest_common.sh@1330 -- # shift 00:15:20.709 21:22:44 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:20.709 21:22:44 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:20.709 21:22:44 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:20.709 21:22:44 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:20.709 21:22:44 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:20.709 21:22:44 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:20.709 21:22:44 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:20.709 21:22:44 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:20.709 21:22:44 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:20.709 21:22:44 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:20.709 21:22:44 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:20.982 21:22:44 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:20.982 21:22:44 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:20.982 21:22:44 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:20.982 21:22:44 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:20.982 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:20.982 fio-3.35 00:15:20.982 Starting 1 thread 00:15:23.545 00:15:23.545 test: (groupid=0, jobs=1): err= 0: pid=81318: Thu Nov 28 21:22:46 2024 00:15:23.545 read: IOPS=5850, BW=22.9MiB/s (24.0MB/s)(45.9MiB/2009msec) 00:15:23.545 slat (usec): min=2, max=330, avg= 2.69, stdev= 4.06 00:15:23.545 clat (usec): min=3265, max=20414, avg=11451.45, stdev=970.83 00:15:23.545 lat (usec): min=3274, max=20417, avg=11454.14, stdev=970.48 00:15:23.545 clat percentiles (usec): 00:15:23.545 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10421], 20.00th=[10683], 00:15:23.545 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:15:23.545 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:15:23.545 | 99.00th=[13566], 99.50th=[13960], 99.90th=[19006], 99.95th=[20055], 00:15:23.545 | 99.99th=[20317] 00:15:23.545 bw ( KiB/s): min=22560, max=23800, per=99.90%, avg=23378.00, stdev=560.97, samples=4 00:15:23.545 iops : min= 5640, max= 5950, avg=5844.50, stdev=140.24, samples=4 00:15:23.545 write: IOPS=5837, BW=22.8MiB/s (23.9MB/s)(45.8MiB/2009msec); 0 zone resets 00:15:23.545 slat (usec): min=2, max=270, avg= 2.80, stdev= 3.08 00:15:23.545 clat (usec): min=2441, max=19907, avg=10372.95, stdev=914.09 00:15:23.545 lat (usec): min=2454, max=19910, avg=10375.75, stdev=913.91 00:15:23.545 clat percentiles (usec): 00:15:23.545 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:15:23.545 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:15:23.545 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11731], 00:15:23.545 | 99.00th=[12256], 99.50th=[12649], 99.90th=[17957], 99.95th=[18744], 00:15:23.545 | 99.99th=[19268] 00:15:23.545 bw ( KiB/s): 
min=23296, max=23368, per=99.91%, avg=23330.00, stdev=39.40, samples=4 00:15:23.545 iops : min= 5824, max= 5842, avg=5832.50, stdev= 9.85, samples=4 00:15:23.545 lat (msec) : 4=0.05%, 10=18.70%, 20=81.23%, 50=0.03% 00:15:23.545 cpu : usr=73.95%, sys=19.87%, ctx=6, majf=0, minf=5 00:15:23.545 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:15:23.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:23.545 issued rwts: total=11753,11728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.545 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:23.545 00:15:23.545 Run status group 0 (all jobs): 00:15:23.545 READ: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=45.9MiB (48.1MB), run=2009-2009msec 00:15:23.545 WRITE: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.8MiB (48.0MB), run=2009-2009msec 00:15:23.545 21:22:46 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:23.545 21:22:47 -- host/fio.sh@74 -- # sync 00:15:23.545 21:22:47 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:15:23.803 21:22:47 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:24.060 21:22:47 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:15:24.317 21:22:47 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:24.575 21:22:48 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:25.510 21:22:49 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:25.510 21:22:49 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:25.510 21:22:49 -- host/fio.sh@86 -- # nvmftestfini 00:15:25.510 21:22:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:25.510 21:22:49 -- nvmf/common.sh@116 -- # sync 00:15:25.510 21:22:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:25.510 21:22:49 -- nvmf/common.sh@119 -- # set +e 00:15:25.510 21:22:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:25.510 21:22:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:25.510 rmmod nvme_tcp 00:15:25.510 rmmod nvme_fabrics 00:15:25.510 rmmod nvme_keyring 00:15:25.510 21:22:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:25.510 21:22:49 -- nvmf/common.sh@123 -- # set -e 00:15:25.510 21:22:49 -- nvmf/common.sh@124 -- # return 0 00:15:25.510 21:22:49 -- nvmf/common.sh@477 -- # '[' -n 81016 ']' 00:15:25.510 21:22:49 -- nvmf/common.sh@478 -- # killprocess 81016 00:15:25.510 21:22:49 -- common/autotest_common.sh@936 -- # '[' -z 81016 ']' 00:15:25.510 21:22:49 -- common/autotest_common.sh@940 -- # kill -0 81016 00:15:25.510 21:22:49 -- common/autotest_common.sh@941 -- # uname 00:15:25.510 21:22:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:25.510 21:22:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81016 00:15:25.510 killing process with pid 81016 00:15:25.510 21:22:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:25.510 21:22:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:25.510 21:22:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81016' 00:15:25.510 21:22:49 -- 
common/autotest_common.sh@955 -- # kill 81016 00:15:25.510 21:22:49 -- common/autotest_common.sh@960 -- # wait 81016 00:15:25.768 21:22:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:25.768 21:22:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:25.768 21:22:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:25.768 21:22:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.768 21:22:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:25.768 21:22:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.768 21:22:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.768 21:22:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.768 21:22:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:25.768 ************************************ 00:15:25.768 END TEST nvmf_fio_host 00:15:25.768 ************************************ 00:15:25.768 00:15:25.768 real 0m19.103s 00:15:25.768 user 1m24.493s 00:15:25.768 sys 0m4.313s 00:15:25.768 21:22:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:25.768 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:15:25.768 21:22:49 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:25.768 21:22:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:25.768 21:22:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:25.768 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:15:25.768 ************************************ 00:15:25.768 START TEST nvmf_failover 00:15:25.768 ************************************ 00:15:25.768 21:22:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:25.768 * Looking for test storage... 00:15:25.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:25.768 21:22:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:25.768 21:22:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:25.768 21:22:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:26.026 21:22:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:26.026 21:22:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:26.026 21:22:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:26.026 21:22:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:26.026 21:22:49 -- scripts/common.sh@335 -- # IFS=.-: 00:15:26.026 21:22:49 -- scripts/common.sh@335 -- # read -ra ver1 00:15:26.026 21:22:49 -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.026 21:22:49 -- scripts/common.sh@336 -- # read -ra ver2 00:15:26.026 21:22:49 -- scripts/common.sh@337 -- # local 'op=<' 00:15:26.026 21:22:49 -- scripts/common.sh@339 -- # ver1_l=2 00:15:26.026 21:22:49 -- scripts/common.sh@340 -- # ver2_l=1 00:15:26.026 21:22:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:26.026 21:22:49 -- scripts/common.sh@343 -- # case "$op" in 00:15:26.026 21:22:49 -- scripts/common.sh@344 -- # : 1 00:15:26.026 21:22:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:26.026 21:22:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:26.027 21:22:49 -- scripts/common.sh@364 -- # decimal 1 00:15:26.027 21:22:49 -- scripts/common.sh@352 -- # local d=1 00:15:26.027 21:22:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.027 21:22:49 -- scripts/common.sh@354 -- # echo 1 00:15:26.027 21:22:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:26.027 21:22:49 -- scripts/common.sh@365 -- # decimal 2 00:15:26.027 21:22:49 -- scripts/common.sh@352 -- # local d=2 00:15:26.027 21:22:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.027 21:22:49 -- scripts/common.sh@354 -- # echo 2 00:15:26.027 21:22:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:26.027 21:22:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:26.027 21:22:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:26.027 21:22:49 -- scripts/common.sh@367 -- # return 0 00:15:26.027 21:22:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.027 21:22:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:26.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.027 --rc genhtml_branch_coverage=1 00:15:26.027 --rc genhtml_function_coverage=1 00:15:26.027 --rc genhtml_legend=1 00:15:26.027 --rc geninfo_all_blocks=1 00:15:26.027 --rc geninfo_unexecuted_blocks=1 00:15:26.027 00:15:26.027 ' 00:15:26.027 21:22:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:26.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.027 --rc genhtml_branch_coverage=1 00:15:26.027 --rc genhtml_function_coverage=1 00:15:26.027 --rc genhtml_legend=1 00:15:26.027 --rc geninfo_all_blocks=1 00:15:26.027 --rc geninfo_unexecuted_blocks=1 00:15:26.027 00:15:26.027 ' 00:15:26.027 21:22:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:26.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.027 --rc genhtml_branch_coverage=1 00:15:26.027 --rc genhtml_function_coverage=1 00:15:26.027 --rc genhtml_legend=1 00:15:26.027 --rc geninfo_all_blocks=1 00:15:26.027 --rc geninfo_unexecuted_blocks=1 00:15:26.027 00:15:26.027 ' 00:15:26.027 21:22:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:26.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.027 --rc genhtml_branch_coverage=1 00:15:26.027 --rc genhtml_function_coverage=1 00:15:26.027 --rc genhtml_legend=1 00:15:26.027 --rc geninfo_all_blocks=1 00:15:26.027 --rc geninfo_unexecuted_blocks=1 00:15:26.027 00:15:26.027 ' 00:15:26.027 21:22:49 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:26.027 21:22:49 -- nvmf/common.sh@7 -- # uname -s 00:15:26.027 21:22:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.027 21:22:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.027 21:22:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.027 21:22:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.027 21:22:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.027 21:22:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.027 21:22:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.027 21:22:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.027 21:22:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.027 21:22:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.027 21:22:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:15:26.027 
21:22:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:15:26.027 21:22:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.027 21:22:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.027 21:22:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:26.027 21:22:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:26.027 21:22:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.027 21:22:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.027 21:22:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.027 21:22:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.027 21:22:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.027 21:22:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.027 21:22:49 -- paths/export.sh@5 -- # export PATH 00:15:26.027 21:22:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.027 21:22:49 -- nvmf/common.sh@46 -- # : 0 00:15:26.027 21:22:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:26.027 21:22:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:26.027 21:22:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:26.027 21:22:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.027 21:22:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.027 21:22:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
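The NVME_CONNECT and NVME_HOST variables assembled above are the pieces an initiator would pass to nvme-cli; this particular test never calls them directly (it drives I/O through bdevperf instead), but as a minimal sketch, reusing the hostnqn/hostid generated for this run and the cnode1 subsystem provisioned later in the test, a manual connect would look like:

  # present the generated host identity when connecting to the test subsystem over NVMe/TCP
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 \
      --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8
  nvme list   # the namespace backed by Malloc0 should show up as a new /dev/nvmeXnY

The hostnqn and hostid values are the ones printed by nvme gen-hostnqn above; on another machine they would differ.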
00:15:26.027 21:22:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:26.027 21:22:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:26.027 21:22:49 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:26.027 21:22:49 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:26.027 21:22:49 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.027 21:22:49 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:26.027 21:22:49 -- host/failover.sh@18 -- # nvmftestinit 00:15:26.027 21:22:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:26.027 21:22:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.027 21:22:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:26.027 21:22:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:26.027 21:22:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:26.027 21:22:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.027 21:22:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.027 21:22:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.027 21:22:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:26.027 21:22:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:26.027 21:22:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:26.027 21:22:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:26.027 21:22:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:26.027 21:22:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:26.027 21:22:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.027 21:22:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:26.027 21:22:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:26.027 21:22:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:26.027 21:22:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:26.027 21:22:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:26.027 21:22:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:26.027 21:22:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.027 21:22:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:26.027 21:22:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:26.027 21:22:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:26.027 21:22:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:26.027 21:22:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:26.027 21:22:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:26.027 Cannot find device "nvmf_tgt_br" 00:15:26.027 21:22:49 -- nvmf/common.sh@154 -- # true 00:15:26.027 21:22:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:26.027 Cannot find device "nvmf_tgt_br2" 00:15:26.027 21:22:49 -- nvmf/common.sh@155 -- # true 00:15:26.027 21:22:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:26.027 21:22:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:26.027 Cannot find device "nvmf_tgt_br" 00:15:26.027 21:22:49 -- nvmf/common.sh@157 -- # true 00:15:26.027 21:22:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:26.027 Cannot find device "nvmf_tgt_br2" 00:15:26.027 21:22:49 -- nvmf/common.sh@158 -- # true 00:15:26.027 21:22:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:26.027 21:22:49 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:15:26.027 21:22:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:26.027 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.027 21:22:49 -- nvmf/common.sh@161 -- # true 00:15:26.027 21:22:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:26.027 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.027 21:22:49 -- nvmf/common.sh@162 -- # true 00:15:26.027 21:22:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:26.027 21:22:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:26.027 21:22:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:26.027 21:22:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:26.027 21:22:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:26.286 21:22:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:26.286 21:22:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:26.286 21:22:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:26.286 21:22:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:26.286 21:22:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:26.286 21:22:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:26.286 21:22:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:26.286 21:22:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:26.286 21:22:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:26.286 21:22:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:26.286 21:22:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:26.286 21:22:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:26.286 21:22:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:26.286 21:22:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:26.286 21:22:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:26.286 21:22:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:26.286 21:22:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:26.286 21:22:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:26.286 21:22:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:26.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:26.286 00:15:26.286 --- 10.0.0.2 ping statistics --- 00:15:26.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.286 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:26.286 21:22:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:26.286 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
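The ip/iptables calls above are nvmf_veth_init building the virtual test network: a veth pair per endpoint, the target end moved into the nvmf_tgt_ns_spdk namespace, and both host-side peers joined to one bridge. Condensed into a standalone sketch (same interface names and 10.0.0.0/24 addresses as this run; the second target interface on 10.0.0.3 is set up the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # host reaches the namespaced target over the bridge

The pings that follow in the log are exactly this reachability check for 10.0.0.2, 10.0.0.3 and, from inside the namespace, back to 10.0.0.1.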
00:15:26.286 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:15:26.286 00:15:26.286 --- 10.0.0.3 ping statistics --- 00:15:26.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.286 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:26.286 21:22:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:26.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:26.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:26.286 00:15:26.286 --- 10.0.0.1 ping statistics --- 00:15:26.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.286 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:26.286 21:22:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.286 21:22:49 -- nvmf/common.sh@421 -- # return 0 00:15:26.286 21:22:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:26.286 21:22:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.286 21:22:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:26.286 21:22:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:26.286 21:22:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.286 21:22:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:26.286 21:22:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:26.286 21:22:49 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:26.286 21:22:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:26.286 21:22:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:26.286 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:15:26.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.286 21:22:49 -- nvmf/common.sh@469 -- # nvmfpid=81563 00:15:26.286 21:22:49 -- nvmf/common.sh@470 -- # waitforlisten 81563 00:15:26.286 21:22:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:26.286 21:22:49 -- common/autotest_common.sh@829 -- # '[' -z 81563 ']' 00:15:26.286 21:22:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.286 21:22:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.286 21:22:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.286 21:22:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.286 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:15:26.286 [2024-11-28 21:22:49.985415] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:26.287 [2024-11-28 21:22:49.985506] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.545 [2024-11-28 21:22:50.121291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:26.545 [2024-11-28 21:22:50.155859] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:26.545 [2024-11-28 21:22:50.156304] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.545 [2024-11-28 21:22:50.156376] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
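nvmfappstart then launches the target inside that namespace (the @208 line above is where the ip netns exec wrapper gets prepended to NVMF_APP). A minimal equivalent of what runs here, assuming the same repo path and the default /var/tmp/spdk.sock RPC socket; the harness's waitforlisten helper is replaced by a simple poll:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # wait for the RPC socket to appear
  # per the notice above, tracepoints can later be snapshotted with: spdk_trace -s nvmf -i 0

The -m 0xE core mask pins the reactors to cores 1-3, which is why the log reports reactors starting on cores 1, 2 and 3 while core 0 is left for the initiator side.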
00:15:26.545 [2024-11-28 21:22:50.156602] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.545 [2024-11-28 21:22:50.157080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.545 [2024-11-28 21:22:50.157155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.545 [2024-11-28 21:22:50.157160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.480 21:22:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.480 21:22:50 -- common/autotest_common.sh@862 -- # return 0 00:15:27.480 21:22:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:27.480 21:22:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.480 21:22:50 -- common/autotest_common.sh@10 -- # set +x 00:15:27.480 21:22:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.480 21:22:51 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:27.738 [2024-11-28 21:22:51.256329] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.738 21:22:51 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:27.997 Malloc0 00:15:27.997 21:22:51 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:28.255 21:22:51 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:28.514 21:22:52 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.773 [2024-11-28 21:22:52.289791] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.773 21:22:52 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:28.773 [2024-11-28 21:22:52.514071] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:29.032 21:22:52 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:29.032 [2024-11-28 21:22:52.754242] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:29.032 21:22:52 -- host/failover.sh@31 -- # bdevperf_pid=81626 00:15:29.032 21:22:52 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:29.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
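host/failover.sh@22 through @28 above provision the target over rpc.py: a TCP transport, a 64 MiB malloc bdev, one subsystem, and listeners on all three ports the test will later cycle through. Written out flat (the loop is only a condensation of the three add_listener calls in the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done

bdevperf is then started with -z (wait for RPC) on its own socket, /var/tmp/bdevperf.sock, so controllers can be attached to it once the target side is ready.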
00:15:29.291 21:22:52 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:29.291 21:22:52 -- host/failover.sh@34 -- # waitforlisten 81626 /var/tmp/bdevperf.sock 00:15:29.291 21:22:52 -- common/autotest_common.sh@829 -- # '[' -z 81626 ']' 00:15:29.291 21:22:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.291 21:22:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.291 21:22:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.291 21:22:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.291 21:22:52 -- common/autotest_common.sh@10 -- # set +x 00:15:30.225 21:22:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.225 21:22:53 -- common/autotest_common.sh@862 -- # return 0 00:15:30.225 21:22:53 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:30.484 NVMe0n1 00:15:30.484 21:22:54 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:30.742 00:15:30.742 21:22:54 -- host/failover.sh@39 -- # run_test_pid=81652 00:15:30.742 21:22:54 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:30.742 21:22:54 -- host/failover.sh@41 -- # sleep 1 00:15:31.676 21:22:55 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.242 [2024-11-28 21:22:55.679427] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679515] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679542] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679567] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679597] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679604] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679612] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679620] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679627] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with 
the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679634] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679642] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679649] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679657] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679664] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679672] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679679] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679687] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679694] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679702] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679724] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679738] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679746] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679753] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679760] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679784] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679806] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679814] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679825] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679833] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679841] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679848] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679855] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 [2024-11-28 21:22:55.679863] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad2b0 is same with the state(5) to be set 00:15:32.242 21:22:55 -- host/failover.sh@45 -- # sleep 3 00:15:35.526 21:22:58 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:35.526 00:15:35.526 21:22:59 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:35.784 [2024-11-28 21:22:59.303169] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303373] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303391] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303399] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303408] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303416] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303424] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303433] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303441] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303449] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303457] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303466] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303483] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303491] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303507] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303516] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303524] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303532] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303565] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303573] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303581] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303597] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303605] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 [2024-11-28 21:22:59.303638] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f96b0 is same with the state(5) to be set 00:15:35.784 21:22:59 -- host/failover.sh@50 -- # sleep 3 00:15:39.085 21:23:02 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:39.085 [2024-11-28 21:23:02.578802] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: 
*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.085 21:23:02 -- host/failover.sh@55 -- # sleep 1 00:15:40.020 21:23:03 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:40.278 [2024-11-28 21:23:03.889307] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.278 [2024-11-28 21:23:03.889360] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.278 [2024-11-28 21:23:03.889373] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889381] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889390] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889406] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889414] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889423] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889431] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889447] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889456] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889472] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889480] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889488] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889504] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889512] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the 
state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889521] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889545] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889553] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889561] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889570] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 [2024-11-28 21:23:03.889578] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b20 is same with the state(5) to be set 00:15:40.279 21:23:03 -- host/failover.sh@59 -- # wait 81652 00:15:46.849 0 00:15:46.849 21:23:09 -- host/failover.sh@61 -- # killprocess 81626 00:15:46.849 21:23:09 -- common/autotest_common.sh@936 -- # '[' -z 81626 ']' 00:15:46.849 21:23:09 -- common/autotest_common.sh@940 -- # kill -0 81626 00:15:46.849 21:23:09 -- common/autotest_common.sh@941 -- # uname 00:15:46.849 21:23:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:46.849 21:23:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81626 00:15:46.849 killing process with pid 81626 00:15:46.849 21:23:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:46.849 21:23:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:46.849 21:23:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81626' 00:15:46.849 21:23:09 -- common/autotest_common.sh@955 -- # kill 81626 00:15:46.849 21:23:09 -- common/autotest_common.sh@960 -- # wait 81626 00:15:46.849 21:23:09 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:46.849 [2024-11-28 21:22:52.815831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:46.849 [2024-11-28 21:22:52.815935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81626 ] 00:15:46.849 [2024-11-28 21:22:52.949866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.849 [2024-11-28 21:22:52.984797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.849 Running I/O for 15 seconds... 
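The listener add/remove calls above (@43 through @57) are the failover exercise itself: bdevperf is given two paths to the same subsystem, the verify workload is started, and the target's listeners are then pulled and restored one port at a time so the initiator has to keep switching paths. The same cycle as a sketch, with the bdevperf-side calls going to its private RPC socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420; sleep 3   # I/O moves to 4421
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421; sleep 3   # I/O moves to 4422
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420; sleep 1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422
  wait   # let the 15 second verify run finish

The repeated "recv state of tqpair ... state(5)" errors here and the ABORTED - SQ DELETION completions in the bdevperf log below are consistent with each listener removal: queue pairs on the dropped port are torn down, their in-flight commands aborted, and I/O resumes on a surviving path.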
00:15:46.849 [2024-11-28 21:22:55.679921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.849 [2024-11-28 21:22:55.679974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.849 [2024-11-28 21:22:55.680001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.849 [2024-11-28 21:22:55.680017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.849 [2024-11-28 21:22:55.680048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 
21:22:55.680316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.850 [2024-11-28 21:22:55.680721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.680961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.680993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.681006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.681038] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.850 [2024-11-28 21:22:55.681051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.681067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.850 [2024-11-28 21:22:55.681081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.681096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.681124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.681142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.681157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.681172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.850 [2024-11-28 21:22:55.681186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.681201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.850 [2024-11-28 21:22:55.681215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.681239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.850 [2024-11-28 21:22:55.681254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.681269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.850 [2024-11-28 21:22:55.681283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.681298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.681329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.681345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.850 [2024-11-28 21:22:55.681359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.681375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.850 [2024-11-28 21:22:55.681389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.850 [2024-11-28 21:22:55.681405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.850 [2024-11-28 21:22:55.681435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.851 [2024-11-28 21:22:55.681553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.851 [2024-11-28 21:22:55.681923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.681979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.681994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.851 
[2024-11-28 21:22:55.682007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.682042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.851 [2024-11-28 21:22:55.682083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.851 [2024-11-28 21:22:55.682112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.851 [2024-11-28 21:22:55.682143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.682171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.851 [2024-11-28 21:22:55.682200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.851 [2024-11-28 21:22:55.682228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.682256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.682285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.682313] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.851 [2024-11-28 21:22:55.682341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.682369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.851 [2024-11-28 21:22:55.682397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.682434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.682464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.851 [2024-11-28 21:22:55.682492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.682520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.682548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.682576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.851 [2024-11-28 21:22:55.682591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.851 [2024-11-28 21:22:55.682609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.682625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.682639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.682654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.682667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.682683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.682696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.682711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.682724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.682739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.852 [2024-11-28 21:22:55.682752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.682767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.852 [2024-11-28 21:22:55.682791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.682807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.682821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.682836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.682849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.682864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:127352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.682878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.682892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.682905] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.682921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.852 [2024-11-28 21:22:55.682934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.682949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.852 [2024-11-28 21:22:55.682962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.682976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.852 [2024-11-28 21:22:55.682990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.852 [2024-11-28 21:22:55.683047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.852 [2024-11-28 21:22:55.683109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.852 [2024-11-28 21:22:55.683298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.852 [2024-11-28 21:22:55.683631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.852 [2024-11-28 21:22:55.683664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.852 [2024-11-28 21:22:55.683779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.852 [2024-11-28 21:22:55.683807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.852 [2024-11-28 21:22:55.683836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.852 [2024-11-28 21:22:55.683864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.852 [2024-11-28 21:22:55.683878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.853 [2024-11-28 21:22:55.683892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 
[2024-11-28 21:22:55.683906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.853 [2024-11-28 21:22:55.683920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:55.683934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:55.683948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:55.683963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:55.683976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:55.683998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:55.684029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:55.684044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:55.684059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:55.684089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:55.684103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:55.684118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbda40 is same with the state(5) to be set 00:15:46.853 [2024-11-28 21:22:55.684135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.853 [2024-11-28 21:22:55.684146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.853 [2024-11-28 21:22:55.684162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127592 len:8 PRP1 0x0 PRP2 0x0 00:15:46.853 [2024-11-28 21:22:55.684176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:55.684222] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfbda40 was disconnected and freed. reset controller. 
00:15:46.853 [2024-11-28 21:22:55.684239] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:46.853 [2024-11-28 21:22:55.684293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.853 [2024-11-28 21:22:55.684316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:55.684330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.853 [2024-11-28 21:22:55.684344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:55.684358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.853 [2024-11-28 21:22:55.684386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:55.684399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.853 [2024-11-28 21:22:55.684412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:55.684425] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:46.853 [2024-11-28 21:22:55.686760] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:46.853 [2024-11-28 21:22:55.686797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf89d40 (9): Bad file descriptor 00:15:46.853 [2024-11-28 21:22:55.723273] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
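The failover recorded above moves the initiator from the subsystem's first TCP listener (10.0.0.2:4420) to a second one (10.0.0.2:4421) and then resets the controller. As an illustrative aside, a minimal sketch of how a two-listener target and a host-side controller with an alternate path can be expressed with SPDK's rpc.py is shown below; the bdev names (Malloc0, NVMe0) and the exact command sequence are assumptions for illustration, not taken from this run's test scripts, and the second attach_controller call only adds a failover path on a multipath-capable build:

  # target side: one subsystem, one namespace, two TCP listeners (assumed setup)
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # host side: attach the controller on the primary trid, then add the alternate trid under the same name
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

With such a setup, dropping the 4420 listener while I/O is in flight produces the pattern seen in this log: queued commands on the old qpair are aborted with "ABORTED - SQ DELETION", the qpair is freed, and bdev_nvme fails over to the 4421 trid before resetting the controller.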
00:15:46.853 [2024-11-28 21:22:59.303670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.853 [2024-11-28 21:22:59.303722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.303762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.853 [2024-11-28 21:22:59.303779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.303794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.853 [2024-11-28 21:22:59.303808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.303822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.853 [2024-11-28 21:22:59.303835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.303849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf89d40 is same with the state(5) to be set 00:15:46.853 [2024-11-28 21:22:59.303927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.303951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.303974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.303990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.853 [2024-11-28 21:22:59.304591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.853 [2024-11-28 21:22:59.304606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.304620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.304636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.304649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.304673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.304688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.304703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.304718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.304733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.304747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.304763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-11-28 21:22:59.304777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.304792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-11-28 21:22:59.304806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.304822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-11-28 21:22:59.304836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.304856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-11-28 21:22:59.304871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.304886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.304900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.304916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-11-28 21:22:59.304930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.304946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.304960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.304976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-11-28 21:22:59.305004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.305063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-11-28 21:22:59.305106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-11-28 21:22:59.305138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305154] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.305168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-11-28 21:22:59.305197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.305229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.305259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-11-28 21:22:59.305288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-11-28 21:22:59.305318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.305347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.305377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.305409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.305439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305454] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.305468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.305505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.305536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.305566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.305596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.854 [2024-11-28 21:22:59.305625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.854 [2024-11-28 21:22:59.305655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.854 [2024-11-28 21:22:59.305671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 21:22:59.305685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.305700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.305715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.305731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.305745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.305761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6160 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 21:22:59.305775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.305790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 21:22:59.305805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.305820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.305834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.305850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 21:22:59.305865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.305890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 21:22:59.305906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.305922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.305936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.305952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 21:22:59.305966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.305982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 21:22:59.305996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 21:22:59.306069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 
21:22:59.306099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 21:22:59.306188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 21:22:59.306218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 21:22:59.306540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 21:22:59.306568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 21:22:59.306625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 21:22:59.306844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.855 [2024-11-28 21:22:59.306889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.855 [2024-11-28 21:22:59.306920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.855 [2024-11-28 21:22:59.306935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.306950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.306965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.306979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.306996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-11-28 21:22:59.307056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:46.856 [2024-11-28 21:22:59.307085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-11-28 21:22:59.307362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-11-28 21:22:59.307392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307407] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-11-28 21:22:59.307422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-11-28 21:22:59.307531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-11-28 21:22:59.307654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-11-28 21:22:59.307727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:35 nsid:1 lba:6520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.856 [2024-11-28 21:22:59.307785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.307966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.307983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.308011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.308044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.308058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.308074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:22:59.308100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.308118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa8c00 is same with the 
state(5) to be set 00:15:46.856 [2024-11-28 21:22:59.308135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.856 [2024-11-28 21:22:59.308147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.856 [2024-11-28 21:22:59.308158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5952 len:8 PRP1 0x0 PRP2 0x0 00:15:46.856 [2024-11-28 21:22:59.308171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:22:59.308216] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfa8c00 was disconnected and freed. reset controller. 00:15:46.856 [2024-11-28 21:22:59.308234] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:15:46.856 [2024-11-28 21:22:59.308248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:46.856 [2024-11-28 21:22:59.310856] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:46.856 [2024-11-28 21:22:59.310895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf89d40 (9): Bad file descriptor 00:15:46.856 [2024-11-28 21:22:59.341985] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:46.856 [2024-11-28 21:23:03.889674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:23:03.889727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.856 [2024-11-28 21:23:03.889754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:117144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.856 [2024-11-28 21:23:03.889770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.889786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:117200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.889800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.889815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.889828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.889863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:117216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.889895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.889911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:117224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.889925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.889940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.889953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.889968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.889981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.889996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:46.857 [2024-11-28 21:23:03.890278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-11-28 21:23:03.890456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:118000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-11-28 21:23:03.890516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-11-28 21:23:03.890546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-11-28 21:23:03.890575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 
21:23:03.890591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-11-28 21:23:03.890664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-11-28 21:23:03.890709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-11-28 21:23:03.890746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-11-28 21:23:03.890834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-11-28 21:23:03.890863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-11-28 21:23:03.890892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-11-28 21:23:03.890920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.857 [2024-11-28 21:23:03.890949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.890977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.890993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.891021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.891036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.891049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.891064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.891077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.857 [2024-11-28 21:23:03.891104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.857 [2024-11-28 21:23:03.891120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-11-28 21:23:03.891278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891294] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-11-28 21:23:03.891337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-11-28 21:23:03.891398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-11-28 21:23:03.891488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-11-28 21:23:03.891518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-11-28 21:23:03.891616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-11-28 21:23:03.891645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-11-28 21:23:03.891673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:118232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.891975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.891988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.892003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:118248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.892016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.892031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.892053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.892072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-11-28 21:23:03.892085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.892100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-11-28 21:23:03.892113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.892128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:118280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.892141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.892156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.858 [2024-11-28 21:23:03.892169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.892184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.858 [2024-11-28 21:23:03.892197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.858 [2024-11-28 21:23:03.892212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 
[2024-11-28 21:23:03.892225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:118312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-11-28 21:23:03.892549] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-11-28 21:23:03.892608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-11-28 21:23:03.892653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:118352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:117952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.892980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:118368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.892993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:118376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.893021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-11-28 21:23:03.893059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:118392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.893108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:118400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-11-28 21:23:03.893137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:118408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-11-28 21:23:03.893176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.893205] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.893233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:118432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.893279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:118440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.893311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.893342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:118456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-11-28 21:23:03.893372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-11-28 21:23:03.893403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.859 [2024-11-28 21:23:03.893432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.893462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:118488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.859 [2024-11-28 21:23:03.893491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.859 [2024-11-28 21:23:03.893507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:118496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-11-28 21:23:03.893521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.893537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-11-28 21:23:03.893558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.893575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-11-28 21:23:03.893590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.893606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-11-28 21:23:03.893620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.893636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:118528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-11-28 21:23:03.893650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.893666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-11-28 21:23:03.893680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.893710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-11-28 21:23:03.893723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.893738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:118552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-11-28 21:23:03.893753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.893768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-11-28 21:23:03.893782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.893797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:118568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.860 [2024-11-28 21:23:03.893813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.893829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.860 [2024-11-28 21:23:03.893843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.893858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c970 is same with the state(5) to be set 00:15:46.860 [2024-11-28 21:23:03.893874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.860 [2024-11-28 21:23:03.893885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.860 [2024-11-28 21:23:03.893896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117976 len:8 PRP1 0x0 PRP2 0x0 00:15:46.860 [2024-11-28 21:23:03.893910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.893955] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf8c970 was disconnected and freed. reset controller. 00:15:46.860 [2024-11-28 21:23:03.893972] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:46.860 [2024-11-28 21:23:03.894034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.860 [2024-11-28 21:23:03.894071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.894088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.860 [2024-11-28 21:23:03.894102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.894115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.860 [2024-11-28 21:23:03.894129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.894143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.860 [2024-11-28 21:23:03.894156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.860 [2024-11-28 21:23:03.894168] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:46.860 [2024-11-28 21:23:03.894216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf89d40 (9): Bad file descriptor 00:15:46.860 [2024-11-28 21:23:03.896746] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:46.860 [2024-11-28 21:23:03.928575] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
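The burst of "ABORTED - SQ DELETION" notices above is the expected side effect of a path failover: when bdev_nvme moves from 10.0.0.2:4422 back to 10.0.0.2:4420 it deletes the old I/O submission queue, and every command still queued on that qpair is completed with an abort status before the controller is reset. When reading a log like this it is usually enough to count the failover and reset markers instead of the per-command notices; a small sketch, assuming the bdevperf output has been captured to the try.txt file this test uses:

  log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  grep -c 'Start failover from' "$log"                   # one line per path switch
  grep -c 'Resetting controller successful' "$log"       # one line per completed reset
  grep -o 'Start failover from [^ ]* to [^ ]*' "$log" | sort | uniq -c   # which paths were involved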
00:15:46.860 00:15:46.860 Latency(us) 00:15:46.860 [2024-11-28T21:23:10.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.860 [2024-11-28T21:23:10.603Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.860 Verification LBA range: start 0x0 length 0x4000 00:15:46.860 NVMe0n1 : 15.01 13521.43 52.82 328.18 0.00 9223.85 471.04 14477.50 00:15:46.860 [2024-11-28T21:23:10.603Z] =================================================================================================================== 00:15:46.860 [2024-11-28T21:23:10.603Z] Total : 13521.43 52.82 328.18 0.00 9223.85 471.04 14477.50 00:15:46.860 Received shutdown signal, test time was about 15.000000 seconds 00:15:46.860 00:15:46.860 Latency(us) 00:15:46.860 [2024-11-28T21:23:10.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.860 [2024-11-28T21:23:10.603Z] =================================================================================================================== 00:15:46.860 [2024-11-28T21:23:10.603Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:46.860 21:23:09 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:46.860 21:23:09 -- host/failover.sh@65 -- # count=3 00:15:46.860 21:23:09 -- host/failover.sh@67 -- # (( count != 3 )) 00:15:46.860 21:23:09 -- host/failover.sh@73 -- # bdevperf_pid=81830 00:15:46.860 21:23:09 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:46.860 21:23:09 -- host/failover.sh@75 -- # waitforlisten 81830 /var/tmp/bdevperf.sock 00:15:46.860 21:23:09 -- common/autotest_common.sh@829 -- # '[' -z 81830 ']' 00:15:46.860 21:23:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:46.860 21:23:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:46.860 21:23:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
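The grep -c gate above requires exactly three 'Resetting controller successful' lines, one per exercised path. With that satisfied, a second bdevperf instance is started with -z so it comes up idle and only serves its RPC socket; the controller is then attached and the workload triggered over that socket. A condensed sketch of the pattern (backgrounding, readiness polling and pid handling are simplified relative to the real script, which uses its waitforlisten helper):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  $bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # poll until the RPC socket answers (the test script uses waitforlisten instead)
  while ! $rpc -s $sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  # attach the controller over the first path, then run the configured workload
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests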
00:15:46.860 21:23:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.860 21:23:09 -- common/autotest_common.sh@10 -- # set +x 00:15:47.119 21:23:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.119 21:23:10 -- common/autotest_common.sh@862 -- # return 0 00:15:47.119 21:23:10 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:47.378 [2024-11-28 21:23:11.002038] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:47.378 21:23:11 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:47.636 [2024-11-28 21:23:11.274349] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:47.636 21:23:11 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:47.896 NVMe0n1 00:15:47.896 21:23:11 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:48.155 00:15:48.155 21:23:11 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:48.723 00:15:48.723 21:23:12 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:48.723 21:23:12 -- host/failover.sh@82 -- # grep -q NVMe0 00:15:48.723 21:23:12 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:48.982 21:23:12 -- host/failover.sh@87 -- # sleep 3 00:15:52.272 21:23:15 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:52.272 21:23:15 -- host/failover.sh@88 -- # grep -q NVMe0 00:15:52.272 21:23:15 -- host/failover.sh@90 -- # run_test_pid=81907 00:15:52.272 21:23:15 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:52.272 21:23:15 -- host/failover.sh@92 -- # wait 81907 00:15:53.726 0 00:15:53.726 21:23:17 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:53.726 [2024-11-28 21:23:09.800871] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:15:53.726 [2024-11-28 21:23:09.800976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81830 ] 00:15:53.726 [2024-11-28 21:23:09.933868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.726 [2024-11-28 21:23:09.969308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.726 [2024-11-28 21:23:12.669854] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:53.726 [2024-11-28 21:23:12.669992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.726 [2024-11-28 21:23:12.670031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.726 [2024-11-28 21:23:12.670054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.726 [2024-11-28 21:23:12.670068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.726 [2024-11-28 21:23:12.670082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.726 [2024-11-28 21:23:12.670095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.726 [2024-11-28 21:23:12.670124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.726 [2024-11-28 21:23:12.670138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.726 [2024-11-28 21:23:12.670151] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:53.726 [2024-11-28 21:23:12.670220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:53.726 [2024-11-28 21:23:12.670252] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeded40 (9): Bad file descriptor 00:15:53.726 [2024-11-28 21:23:12.678866] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:53.726 Running I/O for 1 seconds... 
00:15:53.726 00:15:53.726 Latency(us) 00:15:53.726 [2024-11-28T21:23:17.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.726 [2024-11-28T21:23:17.469Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.726 Verification LBA range: start 0x0 length 0x4000 00:15:53.726 NVMe0n1 : 1.01 13762.57 53.76 0.00 0.00 9249.99 1050.07 15609.48 00:15:53.726 [2024-11-28T21:23:17.469Z] =================================================================================================================== 00:15:53.726 [2024-11-28T21:23:17.469Z] Total : 13762.57 53.76 0.00 0.00 9249.99 1050.07 15609.48 00:15:53.726 21:23:17 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:53.726 21:23:17 -- host/failover.sh@95 -- # grep -q NVMe0 00:15:53.726 21:23:17 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:53.983 21:23:17 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:53.983 21:23:17 -- host/failover.sh@99 -- # grep -q NVMe0 00:15:54.241 21:23:17 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:54.499 21:23:18 -- host/failover.sh@101 -- # sleep 3 00:15:57.784 21:23:21 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:57.784 21:23:21 -- host/failover.sh@103 -- # grep -q NVMe0 00:15:57.784 21:23:21 -- host/failover.sh@108 -- # killprocess 81830 00:15:57.784 21:23:21 -- common/autotest_common.sh@936 -- # '[' -z 81830 ']' 00:15:57.784 21:23:21 -- common/autotest_common.sh@940 -- # kill -0 81830 00:15:57.784 21:23:21 -- common/autotest_common.sh@941 -- # uname 00:15:57.784 21:23:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:57.784 21:23:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81830 00:15:57.784 killing process with pid 81830 00:15:57.784 21:23:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:57.784 21:23:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:57.784 21:23:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81830' 00:15:57.784 21:23:21 -- common/autotest_common.sh@955 -- # kill 81830 00:15:57.784 21:23:21 -- common/autotest_common.sh@960 -- # wait 81830 00:15:58.043 21:23:21 -- host/failover.sh@110 -- # sync 00:15:58.043 21:23:21 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:58.302 21:23:21 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:58.302 21:23:21 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:58.302 21:23:21 -- host/failover.sh@116 -- # nvmftestfini 00:15:58.302 21:23:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:58.302 21:23:21 -- nvmf/common.sh@116 -- # sync 00:15:58.302 21:23:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:58.302 21:23:21 -- nvmf/common.sh@119 -- # set +e 00:15:58.302 21:23:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:58.302 21:23:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:58.302 rmmod nvme_tcp 
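The failover exercise itself is driven entirely through the bdevperf RPC socket: one target path at a time is removed with bdev_nvme_detach_controller, and after a short settle period bdev_nvme_get_controllers must still report NVMe0 on a surviving path. A condensed sketch of that loop (the real script interleaves the detaches and checks slightly differently than shown here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1
  for port in 4422 4421; do
      # drop one listener; bdev_nvme is expected to fail over to a remaining path
      $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
      sleep 3
      # the controller must still be visible through a surviving path
      $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
  done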
00:15:58.302 rmmod nvme_fabrics 00:15:58.302 rmmod nvme_keyring 00:15:58.302 21:23:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:58.302 21:23:21 -- nvmf/common.sh@123 -- # set -e 00:15:58.302 21:23:21 -- nvmf/common.sh@124 -- # return 0 00:15:58.303 21:23:21 -- nvmf/common.sh@477 -- # '[' -n 81563 ']' 00:15:58.303 21:23:21 -- nvmf/common.sh@478 -- # killprocess 81563 00:15:58.303 21:23:21 -- common/autotest_common.sh@936 -- # '[' -z 81563 ']' 00:15:58.303 21:23:21 -- common/autotest_common.sh@940 -- # kill -0 81563 00:15:58.303 21:23:21 -- common/autotest_common.sh@941 -- # uname 00:15:58.303 21:23:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:58.303 21:23:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81563 00:15:58.303 killing process with pid 81563 00:15:58.303 21:23:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:58.303 21:23:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:58.303 21:23:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81563' 00:15:58.303 21:23:22 -- common/autotest_common.sh@955 -- # kill 81563 00:15:58.303 21:23:22 -- common/autotest_common.sh@960 -- # wait 81563 00:15:58.561 21:23:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:58.561 21:23:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:58.561 21:23:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:58.561 21:23:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:58.561 21:23:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:58.561 21:23:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.561 21:23:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.561 21:23:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.561 21:23:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:58.561 ************************************ 00:15:58.561 END TEST nvmf_failover 00:15:58.561 ************************************ 00:15:58.561 00:15:58.561 real 0m32.785s 00:15:58.561 user 2m7.482s 00:15:58.561 sys 0m5.423s 00:15:58.561 21:23:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:58.561 21:23:22 -- common/autotest_common.sh@10 -- # set +x 00:15:58.561 21:23:22 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:58.561 21:23:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:58.561 21:23:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:58.561 21:23:22 -- common/autotest_common.sh@10 -- # set +x 00:15:58.561 ************************************ 00:15:58.561 START TEST nvmf_discovery 00:15:58.561 ************************************ 00:15:58.561 21:23:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:58.821 * Looking for test storage... 
00:15:58.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:58.821 21:23:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:58.821 21:23:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:58.821 21:23:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:58.821 21:23:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:58.821 21:23:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:58.821 21:23:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:58.821 21:23:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:58.821 21:23:22 -- scripts/common.sh@335 -- # IFS=.-: 00:15:58.821 21:23:22 -- scripts/common.sh@335 -- # read -ra ver1 00:15:58.821 21:23:22 -- scripts/common.sh@336 -- # IFS=.-: 00:15:58.821 21:23:22 -- scripts/common.sh@336 -- # read -ra ver2 00:15:58.821 21:23:22 -- scripts/common.sh@337 -- # local 'op=<' 00:15:58.821 21:23:22 -- scripts/common.sh@339 -- # ver1_l=2 00:15:58.821 21:23:22 -- scripts/common.sh@340 -- # ver2_l=1 00:15:58.821 21:23:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:58.821 21:23:22 -- scripts/common.sh@343 -- # case "$op" in 00:15:58.821 21:23:22 -- scripts/common.sh@344 -- # : 1 00:15:58.821 21:23:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:58.821 21:23:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:58.821 21:23:22 -- scripts/common.sh@364 -- # decimal 1 00:15:58.821 21:23:22 -- scripts/common.sh@352 -- # local d=1 00:15:58.821 21:23:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:58.821 21:23:22 -- scripts/common.sh@354 -- # echo 1 00:15:58.821 21:23:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:58.821 21:23:22 -- scripts/common.sh@365 -- # decimal 2 00:15:58.821 21:23:22 -- scripts/common.sh@352 -- # local d=2 00:15:58.821 21:23:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:58.821 21:23:22 -- scripts/common.sh@354 -- # echo 2 00:15:58.821 21:23:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:58.821 21:23:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:58.821 21:23:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:58.821 21:23:22 -- scripts/common.sh@367 -- # return 0 00:15:58.821 21:23:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:58.821 21:23:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:58.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.821 --rc genhtml_branch_coverage=1 00:15:58.821 --rc genhtml_function_coverage=1 00:15:58.821 --rc genhtml_legend=1 00:15:58.821 --rc geninfo_all_blocks=1 00:15:58.821 --rc geninfo_unexecuted_blocks=1 00:15:58.821 00:15:58.821 ' 00:15:58.821 21:23:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:58.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.821 --rc genhtml_branch_coverage=1 00:15:58.821 --rc genhtml_function_coverage=1 00:15:58.821 --rc genhtml_legend=1 00:15:58.821 --rc geninfo_all_blocks=1 00:15:58.821 --rc geninfo_unexecuted_blocks=1 00:15:58.821 00:15:58.821 ' 00:15:58.821 21:23:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:58.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.821 --rc genhtml_branch_coverage=1 00:15:58.821 --rc genhtml_function_coverage=1 00:15:58.821 --rc genhtml_legend=1 00:15:58.821 --rc geninfo_all_blocks=1 00:15:58.821 --rc geninfo_unexecuted_blocks=1 00:15:58.821 00:15:58.821 ' 00:15:58.821 
21:23:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:58.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.821 --rc genhtml_branch_coverage=1 00:15:58.821 --rc genhtml_function_coverage=1 00:15:58.821 --rc genhtml_legend=1 00:15:58.821 --rc geninfo_all_blocks=1 00:15:58.821 --rc geninfo_unexecuted_blocks=1 00:15:58.821 00:15:58.821 ' 00:15:58.821 21:23:22 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:58.821 21:23:22 -- nvmf/common.sh@7 -- # uname -s 00:15:58.821 21:23:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.821 21:23:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.821 21:23:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.821 21:23:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.821 21:23:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.821 21:23:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.821 21:23:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.821 21:23:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.821 21:23:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.821 21:23:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.821 21:23:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:15:58.821 21:23:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:15:58.821 21:23:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.821 21:23:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.821 21:23:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:58.822 21:23:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:58.822 21:23:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.822 21:23:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.822 21:23:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.822 21:23:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.822 21:23:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.822 21:23:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.822 21:23:22 -- paths/export.sh@5 -- # export PATH 00:15:58.822 21:23:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.822 21:23:22 -- nvmf/common.sh@46 -- # : 0 00:15:58.822 21:23:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:58.822 21:23:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:58.822 21:23:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:58.822 21:23:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.822 21:23:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.822 21:23:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:58.822 21:23:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:58.822 21:23:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:58.822 21:23:22 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:58.822 21:23:22 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:58.822 21:23:22 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:58.822 21:23:22 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:58.822 21:23:22 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:58.822 21:23:22 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:58.822 21:23:22 -- host/discovery.sh@25 -- # nvmftestinit 00:15:58.822 21:23:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:58.822 21:23:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.822 21:23:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:58.822 21:23:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:58.822 21:23:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:58.822 21:23:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.822 21:23:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.822 21:23:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.822 21:23:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:58.822 21:23:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:58.822 21:23:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:58.822 21:23:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:58.822 21:23:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:58.822 21:23:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:58.822 21:23:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.822 21:23:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:58.822 21:23:22 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:58.822 21:23:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:58.822 21:23:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:58.822 21:23:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:58.822 21:23:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:58.822 21:23:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.822 21:23:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:58.822 21:23:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:58.822 21:23:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:58.822 21:23:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:58.822 21:23:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:58.822 21:23:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:58.822 Cannot find device "nvmf_tgt_br" 00:15:58.822 21:23:22 -- nvmf/common.sh@154 -- # true 00:15:58.822 21:23:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.822 Cannot find device "nvmf_tgt_br2" 00:15:58.822 21:23:22 -- nvmf/common.sh@155 -- # true 00:15:58.822 21:23:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:58.822 21:23:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:58.822 Cannot find device "nvmf_tgt_br" 00:15:58.822 21:23:22 -- nvmf/common.sh@157 -- # true 00:15:58.822 21:23:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:58.822 Cannot find device "nvmf_tgt_br2" 00:15:58.822 21:23:22 -- nvmf/common.sh@158 -- # true 00:15:58.822 21:23:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:59.081 21:23:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:59.081 21:23:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:59.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:59.081 21:23:22 -- nvmf/common.sh@161 -- # true 00:15:59.081 21:23:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:59.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:59.081 21:23:22 -- nvmf/common.sh@162 -- # true 00:15:59.081 21:23:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:59.081 21:23:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:59.081 21:23:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:59.081 21:23:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:59.081 21:23:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:59.081 21:23:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:59.081 21:23:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:59.081 21:23:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:59.081 21:23:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:59.081 21:23:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:59.081 21:23:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:59.081 21:23:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:59.081 21:23:22 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:59.081 21:23:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:59.081 21:23:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:59.081 21:23:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:59.081 21:23:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:59.081 21:23:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:59.081 21:23:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:59.081 21:23:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:59.081 21:23:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:59.081 21:23:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:59.081 21:23:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:59.081 21:23:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:59.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:15:59.081 00:15:59.081 --- 10.0.0.2 ping statistics --- 00:15:59.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.081 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:59.081 21:23:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:59.081 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:59.081 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:15:59.081 00:15:59.081 --- 10.0.0.3 ping statistics --- 00:15:59.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.081 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:59.081 21:23:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:59.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:59.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:59.081 00:15:59.081 --- 10.0.0.1 ping statistics --- 00:15:59.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.081 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:59.081 21:23:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.081 21:23:22 -- nvmf/common.sh@421 -- # return 0 00:15:59.081 21:23:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:59.081 21:23:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.081 21:23:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:59.081 21:23:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:59.081 21:23:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.081 21:23:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:59.081 21:23:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:59.081 21:23:22 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:59.081 21:23:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:59.081 21:23:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:59.081 21:23:22 -- common/autotest_common.sh@10 -- # set +x 00:15:59.081 21:23:22 -- nvmf/common.sh@469 -- # nvmfpid=82185 00:15:59.081 21:23:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:59.081 21:23:22 -- nvmf/common.sh@470 -- # waitforlisten 82185 00:15:59.081 21:23:22 -- common/autotest_common.sh@829 -- # '[' -z 82185 ']' 00:15:59.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.081 21:23:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.081 21:23:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:59.082 21:23:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.082 21:23:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:59.082 21:23:22 -- common/autotest_common.sh@10 -- # set +x 00:15:59.341 [2024-11-28 21:23:22.852556] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:59.341 [2024-11-28 21:23:22.852645] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.341 [2024-11-28 21:23:22.990386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.341 [2024-11-28 21:23:23.022704] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:59.341 [2024-11-28 21:23:23.022852] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.341 [2024-11-28 21:23:23.022866] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:59.341 [2024-11-28 21:23:23.022873] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
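All of the addressing above comes from nvmf_veth_init: the initiator interface nvmf_init_if (10.0.0.1) stays in the root namespace, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, the bridge-side veth peers are enslaved to nvmf_br, and an iptables rule opens TCP/4420 toward the initiator; the three pings confirm both directions before any NVMe/TCP traffic is attempted. A condensed sketch of the same topology with a single target interface:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1      # target -> initiator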
00:15:59.341 [2024-11-28 21:23:23.022901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.277 21:23:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.277 21:23:23 -- common/autotest_common.sh@862 -- # return 0 00:16:00.277 21:23:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:00.277 21:23:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:00.277 21:23:23 -- common/autotest_common.sh@10 -- # set +x 00:16:00.277 21:23:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.277 21:23:23 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:00.277 21:23:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.277 21:23:23 -- common/autotest_common.sh@10 -- # set +x 00:16:00.277 [2024-11-28 21:23:23.894033] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.277 21:23:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.277 21:23:23 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:00.277 21:23:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.277 21:23:23 -- common/autotest_common.sh@10 -- # set +x 00:16:00.277 [2024-11-28 21:23:23.902179] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:00.277 21:23:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.277 21:23:23 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:00.277 21:23:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.277 21:23:23 -- common/autotest_common.sh@10 -- # set +x 00:16:00.277 null0 00:16:00.277 21:23:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.277 21:23:23 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:00.277 21:23:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.277 21:23:23 -- common/autotest_common.sh@10 -- # set +x 00:16:00.277 null1 00:16:00.277 21:23:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.277 21:23:23 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:00.277 21:23:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.277 21:23:23 -- common/autotest_common.sh@10 -- # set +x 00:16:00.277 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:00.277 21:23:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.277 21:23:23 -- host/discovery.sh@45 -- # hostpid=82217 00:16:00.277 21:23:23 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:00.277 21:23:23 -- host/discovery.sh@46 -- # waitforlisten 82217 /tmp/host.sock 00:16:00.277 21:23:23 -- common/autotest_common.sh@829 -- # '[' -z 82217 ']' 00:16:00.277 21:23:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:00.277 21:23:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.277 21:23:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:00.278 21:23:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.278 21:23:23 -- common/autotest_common.sh@10 -- # set +x 00:16:00.278 [2024-11-28 21:23:23.972276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:00.278 [2024-11-28 21:23:23.972550] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82217 ] 00:16:00.536 [2024-11-28 21:23:24.105464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.536 [2024-11-28 21:23:24.140748] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:00.536 [2024-11-28 21:23:24.141193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.477 21:23:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.477 21:23:24 -- common/autotest_common.sh@862 -- # return 0 00:16:01.477 21:23:24 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:01.477 21:23:24 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:01.477 21:23:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.477 21:23:24 -- common/autotest_common.sh@10 -- # set +x 00:16:01.477 21:23:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.477 21:23:24 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:01.477 21:23:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.477 21:23:24 -- common/autotest_common.sh@10 -- # set +x 00:16:01.477 21:23:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.477 21:23:24 -- host/discovery.sh@72 -- # notify_id=0 00:16:01.477 21:23:24 -- host/discovery.sh@78 -- # get_subsystem_names 00:16:01.477 21:23:24 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.477 21:23:24 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.477 21:23:24 -- host/discovery.sh@59 -- # sort 00:16:01.477 21:23:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.477 21:23:24 -- common/autotest_common.sh@10 -- # set +x 00:16:01.477 21:23:24 -- host/discovery.sh@59 -- # xargs 00:16:01.477 21:23:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.477 21:23:24 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:16:01.477 21:23:24 -- host/discovery.sh@79 -- # get_bdev_list 00:16:01.477 21:23:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.477 21:23:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.477 21:23:24 -- common/autotest_common.sh@10 -- # set +x 00:16:01.477 21:23:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.477 21:23:24 -- host/discovery.sh@55 -- # sort 00:16:01.477 21:23:24 -- host/discovery.sh@55 -- # xargs 00:16:01.477 21:23:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.477 21:23:25 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:16:01.477 21:23:25 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:01.477 21:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.477 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:16:01.477 21:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.477 21:23:25 -- host/discovery.sh@82 -- # get_subsystem_names 00:16:01.477 21:23:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.477 21:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.478 21:23:25 -- 
host/discovery.sh@59 -- # sort 00:16:01.478 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:16:01.478 21:23:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.478 21:23:25 -- host/discovery.sh@59 -- # xargs 00:16:01.478 21:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.478 21:23:25 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:16:01.478 21:23:25 -- host/discovery.sh@83 -- # get_bdev_list 00:16:01.478 21:23:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.478 21:23:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.478 21:23:25 -- host/discovery.sh@55 -- # sort 00:16:01.478 21:23:25 -- host/discovery.sh@55 -- # xargs 00:16:01.478 21:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.478 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:16:01.478 21:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.478 21:23:25 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:01.478 21:23:25 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:01.478 21:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.478 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:16:01.478 21:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.478 21:23:25 -- host/discovery.sh@86 -- # get_subsystem_names 00:16:01.478 21:23:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.478 21:23:25 -- host/discovery.sh@59 -- # sort 00:16:01.478 21:23:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.478 21:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.478 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:16:01.478 21:23:25 -- host/discovery.sh@59 -- # xargs 00:16:01.478 21:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.478 21:23:25 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:16:01.478 21:23:25 -- host/discovery.sh@87 -- # get_bdev_list 00:16:01.478 21:23:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.478 21:23:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.478 21:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.478 21:23:25 -- host/discovery.sh@55 -- # sort 00:16:01.478 21:23:25 -- host/discovery.sh@55 -- # xargs 00:16:01.478 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:16:01.478 21:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.738 21:23:25 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:01.738 21:23:25 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:01.738 21:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.738 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:16:01.738 [2024-11-28 21:23:25.254618] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.738 21:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.738 21:23:25 -- host/discovery.sh@92 -- # get_subsystem_names 00:16:01.738 21:23:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.738 21:23:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.738 21:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.738 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:16:01.738 21:23:25 -- host/discovery.sh@59 -- # xargs 00:16:01.738 21:23:25 -- host/discovery.sh@59 -- # sort 
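The checks above and below all go through two small helpers that query the host-side app on /tmp/host.sock: get_subsystem_names lists the attached NVMe controllers and get_bdev_list lists the bdevs they expose. Judging from the xtrace, each is essentially a pipeline of the following shape, and the test compares the flattened output against an expected string ('' while nothing is attached, later nvme0 and 'nvme0n1 nvme0n2'):

  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs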
00:16:01.738 21:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.738 21:23:25 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:01.738 21:23:25 -- host/discovery.sh@93 -- # get_bdev_list 00:16:01.738 21:23:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.738 21:23:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.738 21:23:25 -- host/discovery.sh@55 -- # sort 00:16:01.738 21:23:25 -- host/discovery.sh@55 -- # xargs 00:16:01.738 21:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.738 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:16:01.738 21:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.738 21:23:25 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:16:01.738 21:23:25 -- host/discovery.sh@94 -- # get_notification_count 00:16:01.738 21:23:25 -- host/discovery.sh@74 -- # jq '. | length' 00:16:01.738 21:23:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:01.738 21:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.738 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:16:01.738 21:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.738 21:23:25 -- host/discovery.sh@74 -- # notification_count=0 00:16:01.738 21:23:25 -- host/discovery.sh@75 -- # notify_id=0 00:16:01.738 21:23:25 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:16:01.738 21:23:25 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:01.738 21:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.738 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:16:01.738 21:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.738 21:23:25 -- host/discovery.sh@100 -- # sleep 1 00:16:02.307 [2024-11-28 21:23:25.907328] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:02.307 [2024-11-28 21:23:25.907364] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:02.307 [2024-11-28 21:23:25.907385] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:02.307 [2024-11-28 21:23:25.913378] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:02.307 [2024-11-28 21:23:25.969202] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:02.307 [2024-11-28 21:23:25.969395] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:02.874 21:23:26 -- host/discovery.sh@101 -- # get_subsystem_names 00:16:02.874 21:23:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.874 21:23:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.874 21:23:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.874 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:16:02.874 21:23:26 -- host/discovery.sh@59 -- # sort 00:16:02.874 21:23:26 -- host/discovery.sh@59 -- # xargs 00:16:02.874 21:23:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.874 21:23:26 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.874 21:23:26 -- host/discovery.sh@102 -- # get_bdev_list 00:16:02.874 21:23:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:16:02.874 21:23:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.874 21:23:26 -- host/discovery.sh@55 -- # sort 00:16:02.874 21:23:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.874 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:16:02.874 21:23:26 -- host/discovery.sh@55 -- # xargs 00:16:02.874 21:23:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.874 21:23:26 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:02.874 21:23:26 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:16:02.874 21:23:26 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:02.874 21:23:26 -- host/discovery.sh@63 -- # sort -n 00:16:02.874 21:23:26 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:02.874 21:23:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.874 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:16:02.874 21:23:26 -- host/discovery.sh@63 -- # xargs 00:16:02.874 21:23:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.874 21:23:26 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:16:02.874 21:23:26 -- host/discovery.sh@104 -- # get_notification_count 00:16:02.874 21:23:26 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:02.874 21:23:26 -- host/discovery.sh@74 -- # jq '. | length' 00:16:02.874 21:23:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.874 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:16:02.874 21:23:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.132 21:23:26 -- host/discovery.sh@74 -- # notification_count=1 00:16:03.132 21:23:26 -- host/discovery.sh@75 -- # notify_id=1 00:16:03.132 21:23:26 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:16:03.132 21:23:26 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:03.132 21:23:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.132 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:16:03.132 21:23:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.132 21:23:26 -- host/discovery.sh@109 -- # sleep 1 00:16:04.070 21:23:27 -- host/discovery.sh@110 -- # get_bdev_list 00:16:04.070 21:23:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:04.070 21:23:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.070 21:23:27 -- common/autotest_common.sh@10 -- # set +x 00:16:04.071 21:23:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:04.071 21:23:27 -- host/discovery.sh@55 -- # sort 00:16:04.071 21:23:27 -- host/discovery.sh@55 -- # xargs 00:16:04.071 21:23:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.071 21:23:27 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:04.071 21:23:27 -- host/discovery.sh@111 -- # get_notification_count 00:16:04.071 21:23:27 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:04.071 21:23:27 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:04.071 21:23:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.071 21:23:27 -- common/autotest_common.sh@10 -- # set +x 00:16:04.071 21:23:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.071 21:23:27 -- host/discovery.sh@74 -- # notification_count=1 00:16:04.071 21:23:27 -- host/discovery.sh@75 -- # notify_id=2 00:16:04.071 21:23:27 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:16:04.071 21:23:27 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:04.071 21:23:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.071 21:23:27 -- common/autotest_common.sh@10 -- # set +x 00:16:04.071 [2024-11-28 21:23:27.785488] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:04.071 [2024-11-28 21:23:27.786668] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:04.071 [2024-11-28 21:23:27.786703] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:04.071 21:23:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.071 21:23:27 -- host/discovery.sh@117 -- # sleep 1 00:16:04.071 [2024-11-28 21:23:27.792659] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:04.330 [2024-11-28 21:23:27.850904] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:04.330 [2024-11-28 21:23:27.850928] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:04.330 [2024-11-28 21:23:27.850935] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:05.339 21:23:28 -- host/discovery.sh@118 -- # get_subsystem_names 00:16:05.339 21:23:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:05.339 21:23:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.339 21:23:28 -- common/autotest_common.sh@10 -- # set +x 00:16:05.339 21:23:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:05.339 21:23:28 -- host/discovery.sh@59 -- # sort 00:16:05.339 21:23:28 -- host/discovery.sh@59 -- # xargs 00:16:05.339 21:23:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.339 21:23:28 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.339 21:23:28 -- host/discovery.sh@119 -- # get_bdev_list 00:16:05.339 21:23:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:05.339 21:23:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.339 21:23:28 -- common/autotest_common.sh@10 -- # set +x 00:16:05.339 21:23:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:05.339 21:23:28 -- host/discovery.sh@55 -- # sort 00:16:05.339 21:23:28 -- host/discovery.sh@55 -- # xargs 00:16:05.339 21:23:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.339 21:23:28 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:05.339 21:23:28 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:16:05.339 21:23:28 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:05.339 21:23:28 -- 
host/discovery.sh@63 -- # sort -n 00:16:05.339 21:23:28 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:05.339 21:23:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.339 21:23:28 -- host/discovery.sh@63 -- # xargs 00:16:05.339 21:23:28 -- common/autotest_common.sh@10 -- # set +x 00:16:05.339 21:23:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.339 21:23:28 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:05.339 21:23:28 -- host/discovery.sh@121 -- # get_notification_count 00:16:05.339 21:23:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:05.339 21:23:28 -- host/discovery.sh@74 -- # jq '. | length' 00:16:05.339 21:23:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.339 21:23:28 -- common/autotest_common.sh@10 -- # set +x 00:16:05.339 21:23:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.339 21:23:29 -- host/discovery.sh@74 -- # notification_count=0 00:16:05.339 21:23:29 -- host/discovery.sh@75 -- # notify_id=2 00:16:05.339 21:23:29 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:16:05.339 21:23:29 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:05.339 21:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.339 21:23:29 -- common/autotest_common.sh@10 -- # set +x 00:16:05.339 [2024-11-28 21:23:29.016389] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:05.339 [2024-11-28 21:23:29.016454] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:05.339 [2024-11-28 21:23:29.017181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.339 [2024-11-28 21:23:29.017211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.339 [2024-11-28 21:23:29.017224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.339 [2024-11-28 21:23:29.017234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.339 [2024-11-28 21:23:29.017244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.339 [2024-11-28 21:23:29.017253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.339 [2024-11-28 21:23:29.017263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.339 [2024-11-28 21:23:29.017272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.339 [2024-11-28 21:23:29.017281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1359150 is same with the state(5) to be set 00:16:05.339 21:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.339 21:23:29 -- host/discovery.sh@127 -- # sleep 1 00:16:05.339 [2024-11-28 21:23:29.022356] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 
not found 00:16:05.339 [2024-11-28 21:23:29.022410] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:05.339 [2024-11-28 21:23:29.022485] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1359150 (9): Bad file descriptor 00:16:06.717 21:23:30 -- host/discovery.sh@128 -- # get_subsystem_names 00:16:06.718 21:23:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:06.718 21:23:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:06.718 21:23:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.718 21:23:30 -- common/autotest_common.sh@10 -- # set +x 00:16:06.718 21:23:30 -- host/discovery.sh@59 -- # xargs 00:16:06.718 21:23:30 -- host/discovery.sh@59 -- # sort 00:16:06.718 21:23:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.718 21:23:30 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.718 21:23:30 -- host/discovery.sh@129 -- # get_bdev_list 00:16:06.718 21:23:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:06.718 21:23:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:06.718 21:23:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.718 21:23:30 -- host/discovery.sh@55 -- # xargs 00:16:06.718 21:23:30 -- common/autotest_common.sh@10 -- # set +x 00:16:06.718 21:23:30 -- host/discovery.sh@55 -- # sort 00:16:06.718 21:23:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.718 21:23:30 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:06.718 21:23:30 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:16:06.718 21:23:30 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:06.718 21:23:30 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:06.718 21:23:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.718 21:23:30 -- host/discovery.sh@63 -- # xargs 00:16:06.718 21:23:30 -- host/discovery.sh@63 -- # sort -n 00:16:06.718 21:23:30 -- common/autotest_common.sh@10 -- # set +x 00:16:06.718 21:23:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.718 21:23:30 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:16:06.718 21:23:30 -- host/discovery.sh@131 -- # get_notification_count 00:16:06.718 21:23:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:06.718 21:23:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.718 21:23:30 -- common/autotest_common.sh@10 -- # set +x 00:16:06.718 21:23:30 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:06.718 21:23:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.718 21:23:30 -- host/discovery.sh@74 -- # notification_count=0 00:16:06.718 21:23:30 -- host/discovery.sh@75 -- # notify_id=2 00:16:06.718 21:23:30 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:16:06.718 21:23:30 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:06.718 21:23:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.718 21:23:30 -- common/autotest_common.sh@10 -- # set +x 00:16:06.718 21:23:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.718 21:23:30 -- host/discovery.sh@135 -- # sleep 1 00:16:07.656 21:23:31 -- host/discovery.sh@136 -- # get_subsystem_names 00:16:07.656 21:23:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:07.656 21:23:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:07.656 21:23:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.656 21:23:31 -- common/autotest_common.sh@10 -- # set +x 00:16:07.656 21:23:31 -- host/discovery.sh@59 -- # sort 00:16:07.656 21:23:31 -- host/discovery.sh@59 -- # xargs 00:16:07.656 21:23:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.656 21:23:31 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:16:07.656 21:23:31 -- host/discovery.sh@137 -- # get_bdev_list 00:16:07.656 21:23:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:07.656 21:23:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.656 21:23:31 -- common/autotest_common.sh@10 -- # set +x 00:16:07.656 21:23:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:07.656 21:23:31 -- host/discovery.sh@55 -- # sort 00:16:07.656 21:23:31 -- host/discovery.sh@55 -- # xargs 00:16:07.656 21:23:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.656 21:23:31 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:16:07.656 21:23:31 -- host/discovery.sh@138 -- # get_notification_count 00:16:07.656 21:23:31 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:07.656 21:23:31 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:07.656 21:23:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.656 21:23:31 -- common/autotest_common.sh@10 -- # set +x 00:16:07.915 21:23:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.915 21:23:31 -- host/discovery.sh@74 -- # notification_count=2 00:16:07.915 21:23:31 -- host/discovery.sh@75 -- # notify_id=4 00:16:07.915 21:23:31 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:16:07.915 21:23:31 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:07.915 21:23:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.915 21:23:31 -- common/autotest_common.sh@10 -- # set +x 00:16:08.852 [2024-11-28 21:23:32.460096] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:08.852 [2024-11-28 21:23:32.460141] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:08.852 [2024-11-28 21:23:32.460162] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:08.852 [2024-11-28 21:23:32.466141] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:08.852 [2024-11-28 21:23:32.525405] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:08.852 [2024-11-28 21:23:32.525468] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:08.852 21:23:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.852 21:23:32 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:08.852 21:23:32 -- common/autotest_common.sh@650 -- # local es=0 00:16:08.852 21:23:32 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:08.852 21:23:32 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:08.852 21:23:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.852 21:23:32 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:08.852 21:23:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.852 21:23:32 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:08.852 21:23:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.852 21:23:32 -- common/autotest_common.sh@10 -- # set +x 00:16:08.852 request: 00:16:08.852 { 00:16:08.852 "name": "nvme", 00:16:08.852 "trtype": "tcp", 00:16:08.852 "traddr": "10.0.0.2", 00:16:08.852 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:08.852 "adrfam": "ipv4", 00:16:08.852 "trsvcid": "8009", 00:16:08.852 "wait_for_attach": true, 00:16:08.852 "method": "bdev_nvme_start_discovery", 00:16:08.852 "req_id": 1 00:16:08.852 } 00:16:08.852 Got JSON-RPC error response 00:16:08.852 response: 00:16:08.852 { 00:16:08.852 "code": -17, 00:16:08.852 "message": "File exists" 00:16:08.852 } 00:16:08.852 21:23:32 -- common/autotest_common.sh@589 -- # 
[[ 1 == 0 ]] 00:16:08.852 21:23:32 -- common/autotest_common.sh@653 -- # es=1 00:16:08.852 21:23:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:08.852 21:23:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:08.852 21:23:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:08.852 21:23:32 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:16:08.852 21:23:32 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:08.852 21:23:32 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:08.852 21:23:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.852 21:23:32 -- common/autotest_common.sh@10 -- # set +x 00:16:08.852 21:23:32 -- host/discovery.sh@67 -- # sort 00:16:08.852 21:23:32 -- host/discovery.sh@67 -- # xargs 00:16:08.852 21:23:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.111 21:23:32 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:16:09.111 21:23:32 -- host/discovery.sh@147 -- # get_bdev_list 00:16:09.111 21:23:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:09.111 21:23:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:09.111 21:23:32 -- host/discovery.sh@55 -- # sort 00:16:09.111 21:23:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.111 21:23:32 -- host/discovery.sh@55 -- # xargs 00:16:09.111 21:23:32 -- common/autotest_common.sh@10 -- # set +x 00:16:09.111 21:23:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.111 21:23:32 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:09.111 21:23:32 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:09.111 21:23:32 -- common/autotest_common.sh@650 -- # local es=0 00:16:09.111 21:23:32 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:09.111 21:23:32 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:09.111 21:23:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.111 21:23:32 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:09.111 21:23:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.111 21:23:32 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:09.111 21:23:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.111 21:23:32 -- common/autotest_common.sh@10 -- # set +x 00:16:09.111 request: 00:16:09.112 { 00:16:09.112 "name": "nvme_second", 00:16:09.112 "trtype": "tcp", 00:16:09.112 "traddr": "10.0.0.2", 00:16:09.112 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:09.112 "adrfam": "ipv4", 00:16:09.112 "trsvcid": "8009", 00:16:09.112 "wait_for_attach": true, 00:16:09.112 "method": "bdev_nvme_start_discovery", 00:16:09.112 "req_id": 1 00:16:09.112 } 00:16:09.112 Got JSON-RPC error response 00:16:09.112 response: 00:16:09.112 { 00:16:09.112 "code": -17, 00:16:09.112 "message": "File exists" 00:16:09.112 } 00:16:09.112 21:23:32 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:09.112 21:23:32 -- common/autotest_common.sh@653 -- # es=1 00:16:09.112 21:23:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:09.112 21:23:32 -- common/autotest_common.sh@672 -- 
# [[ -n '' ]] 00:16:09.112 21:23:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:09.112 21:23:32 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:16:09.112 21:23:32 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:09.112 21:23:32 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:09.112 21:23:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.112 21:23:32 -- common/autotest_common.sh@10 -- # set +x 00:16:09.112 21:23:32 -- host/discovery.sh@67 -- # sort 00:16:09.112 21:23:32 -- host/discovery.sh@67 -- # xargs 00:16:09.112 21:23:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.112 21:23:32 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:16:09.112 21:23:32 -- host/discovery.sh@153 -- # get_bdev_list 00:16:09.112 21:23:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:09.112 21:23:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.112 21:23:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:09.112 21:23:32 -- common/autotest_common.sh@10 -- # set +x 00:16:09.112 21:23:32 -- host/discovery.sh@55 -- # sort 00:16:09.112 21:23:32 -- host/discovery.sh@55 -- # xargs 00:16:09.112 21:23:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.112 21:23:32 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:09.112 21:23:32 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:09.112 21:23:32 -- common/autotest_common.sh@650 -- # local es=0 00:16:09.112 21:23:32 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:09.112 21:23:32 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:09.112 21:23:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.112 21:23:32 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:09.112 21:23:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.112 21:23:32 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:09.112 21:23:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.112 21:23:32 -- common/autotest_common.sh@10 -- # set +x 00:16:10.489 [2024-11-28 21:23:33.791441] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:10.489 [2024-11-28 21:23:33.791620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:10.489 [2024-11-28 21:23:33.791666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:10.489 [2024-11-28 21:23:33.791684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139a350 with addr=10.0.0.2, port=8010 00:16:10.489 [2024-11-28 21:23:33.791701] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:10.489 [2024-11-28 21:23:33.791711] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:10.489 [2024-11-28 21:23:33.791720] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:11.057 [2024-11-28 21:23:34.791459] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:11.057 
[2024-11-28 21:23:34.791604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:11.057 [2024-11-28 21:23:34.791647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:11.057 [2024-11-28 21:23:34.791663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139a350 with addr=10.0.0.2, port=8010 00:16:11.057 [2024-11-28 21:23:34.791680] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:11.057 [2024-11-28 21:23:34.791689] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:11.057 [2024-11-28 21:23:34.791700] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:12.432 [2024-11-28 21:23:35.791307] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:12.432 request: 00:16:12.432 { 00:16:12.432 "name": "nvme_second", 00:16:12.432 "trtype": "tcp", 00:16:12.432 "traddr": "10.0.0.2", 00:16:12.432 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:12.432 "adrfam": "ipv4", 00:16:12.432 "trsvcid": "8010", 00:16:12.432 "attach_timeout_ms": 3000, 00:16:12.432 "method": "bdev_nvme_start_discovery", 00:16:12.432 "req_id": 1 00:16:12.432 } 00:16:12.432 Got JSON-RPC error response 00:16:12.432 response: 00:16:12.432 { 00:16:12.432 "code": -110, 00:16:12.432 "message": "Connection timed out" 00:16:12.432 } 00:16:12.432 21:23:35 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:12.432 21:23:35 -- common/autotest_common.sh@653 -- # es=1 00:16:12.432 21:23:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:12.432 21:23:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:12.432 21:23:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:12.432 21:23:35 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:16:12.432 21:23:35 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:12.432 21:23:35 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:12.432 21:23:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.432 21:23:35 -- common/autotest_common.sh@10 -- # set +x 00:16:12.432 21:23:35 -- host/discovery.sh@67 -- # sort 00:16:12.432 21:23:35 -- host/discovery.sh@67 -- # xargs 00:16:12.432 21:23:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.432 21:23:35 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:16:12.433 21:23:35 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:16:12.433 21:23:35 -- host/discovery.sh@162 -- # kill 82217 00:16:12.433 21:23:35 -- host/discovery.sh@163 -- # nvmftestfini 00:16:12.433 21:23:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:12.433 21:23:35 -- nvmf/common.sh@116 -- # sync 00:16:12.433 21:23:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:12.433 21:23:35 -- nvmf/common.sh@119 -- # set +e 00:16:12.433 21:23:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:12.433 21:23:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:12.433 rmmod nvme_tcp 00:16:12.433 rmmod nvme_fabrics 00:16:12.433 rmmod nvme_keyring 00:16:12.433 21:23:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:12.433 21:23:35 -- nvmf/common.sh@123 -- # set -e 00:16:12.433 21:23:35 -- nvmf/common.sh@124 -- # return 0 00:16:12.433 21:23:35 -- nvmf/common.sh@477 -- # '[' -n 82185 ']' 00:16:12.433 21:23:35 -- nvmf/common.sh@478 -- # killprocess 82185 00:16:12.433 21:23:35 -- common/autotest_common.sh@936 -- # '[' -z 82185 ']' 
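The 'Connection timed out' error in this block is the expected result of the last discovery case: host/discovery.sh@156 starts a second discovery service against port 8010, where nothing is listening, and passes -T 3000 (surfaced as attach_timeout_ms in the JSON request above). Each connect() fails with errno 111, and once the 3000 ms budget is spent the discovery poller logs 'timed out while attaching discovery ctrlr' and the RPC fails with code -110, rather than the -17 'File exists' returned by the earlier duplicate start_discovery attempts:

  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000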
00:16:12.433 21:23:35 -- common/autotest_common.sh@940 -- # kill -0 82185 00:16:12.433 21:23:35 -- common/autotest_common.sh@941 -- # uname 00:16:12.433 21:23:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:12.433 21:23:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82185 00:16:12.433 21:23:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:12.433 21:23:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:12.433 killing process with pid 82185 00:16:12.433 21:23:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82185' 00:16:12.433 21:23:36 -- common/autotest_common.sh@955 -- # kill 82185 00:16:12.433 21:23:36 -- common/autotest_common.sh@960 -- # wait 82185 00:16:12.433 21:23:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:12.433 21:23:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:12.433 21:23:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:12.433 21:23:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.433 21:23:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:12.433 21:23:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.433 21:23:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.433 21:23:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.692 21:23:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:12.692 ************************************ 00:16:12.692 END TEST nvmf_discovery 00:16:12.692 ************************************ 00:16:12.692 00:16:12.692 real 0m13.932s 00:16:12.692 user 0m26.651s 00:16:12.692 sys 0m2.195s 00:16:12.692 21:23:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:12.692 21:23:36 -- common/autotest_common.sh@10 -- # set +x 00:16:12.692 21:23:36 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:12.692 21:23:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:12.692 21:23:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:12.692 21:23:36 -- common/autotest_common.sh@10 -- # set +x 00:16:12.692 ************************************ 00:16:12.692 START TEST nvmf_discovery_remove_ifc 00:16:12.692 ************************************ 00:16:12.692 21:23:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:12.692 * Looking for test storage... 
00:16:12.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:12.692 21:23:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:12.692 21:23:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:12.692 21:23:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:12.692 21:23:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:12.692 21:23:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:12.692 21:23:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:12.692 21:23:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:12.692 21:23:36 -- scripts/common.sh@335 -- # IFS=.-: 00:16:12.692 21:23:36 -- scripts/common.sh@335 -- # read -ra ver1 00:16:12.692 21:23:36 -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.692 21:23:36 -- scripts/common.sh@336 -- # read -ra ver2 00:16:12.692 21:23:36 -- scripts/common.sh@337 -- # local 'op=<' 00:16:12.692 21:23:36 -- scripts/common.sh@339 -- # ver1_l=2 00:16:12.692 21:23:36 -- scripts/common.sh@340 -- # ver2_l=1 00:16:12.692 21:23:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:12.692 21:23:36 -- scripts/common.sh@343 -- # case "$op" in 00:16:12.692 21:23:36 -- scripts/common.sh@344 -- # : 1 00:16:12.692 21:23:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:12.692 21:23:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:12.692 21:23:36 -- scripts/common.sh@364 -- # decimal 1 00:16:12.692 21:23:36 -- scripts/common.sh@352 -- # local d=1 00:16:12.692 21:23:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.692 21:23:36 -- scripts/common.sh@354 -- # echo 1 00:16:12.692 21:23:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:12.692 21:23:36 -- scripts/common.sh@365 -- # decimal 2 00:16:12.692 21:23:36 -- scripts/common.sh@352 -- # local d=2 00:16:12.692 21:23:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.692 21:23:36 -- scripts/common.sh@354 -- # echo 2 00:16:12.692 21:23:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:12.692 21:23:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:12.692 21:23:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:12.692 21:23:36 -- scripts/common.sh@367 -- # return 0 00:16:12.692 21:23:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.692 21:23:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:12.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.692 --rc genhtml_branch_coverage=1 00:16:12.692 --rc genhtml_function_coverage=1 00:16:12.692 --rc genhtml_legend=1 00:16:12.692 --rc geninfo_all_blocks=1 00:16:12.692 --rc geninfo_unexecuted_blocks=1 00:16:12.692 00:16:12.692 ' 00:16:12.692 21:23:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:12.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.692 --rc genhtml_branch_coverage=1 00:16:12.692 --rc genhtml_function_coverage=1 00:16:12.692 --rc genhtml_legend=1 00:16:12.692 --rc geninfo_all_blocks=1 00:16:12.692 --rc geninfo_unexecuted_blocks=1 00:16:12.692 00:16:12.692 ' 00:16:12.692 21:23:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:12.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.692 --rc genhtml_branch_coverage=1 00:16:12.692 --rc genhtml_function_coverage=1 00:16:12.692 --rc genhtml_legend=1 00:16:12.692 --rc geninfo_all_blocks=1 00:16:12.692 --rc geninfo_unexecuted_blocks=1 00:16:12.692 00:16:12.692 ' 00:16:12.692 
21:23:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:12.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.692 --rc genhtml_branch_coverage=1 00:16:12.692 --rc genhtml_function_coverage=1 00:16:12.692 --rc genhtml_legend=1 00:16:12.692 --rc geninfo_all_blocks=1 00:16:12.692 --rc geninfo_unexecuted_blocks=1 00:16:12.692 00:16:12.692 ' 00:16:12.692 21:23:36 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.692 21:23:36 -- nvmf/common.sh@7 -- # uname -s 00:16:12.692 21:23:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.692 21:23:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.692 21:23:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.692 21:23:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.692 21:23:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.692 21:23:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.692 21:23:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.692 21:23:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.692 21:23:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.692 21:23:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.952 21:23:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:16:12.952 21:23:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:16:12.952 21:23:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.952 21:23:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.952 21:23:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:12.952 21:23:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.952 21:23:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.952 21:23:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.952 21:23:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.952 21:23:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.952 21:23:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.952 21:23:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.952 21:23:36 -- paths/export.sh@5 -- # export PATH 00:16:12.952 21:23:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.952 21:23:36 -- nvmf/common.sh@46 -- # : 0 00:16:12.952 21:23:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:12.952 21:23:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:12.952 21:23:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:12.952 21:23:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.952 21:23:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.952 21:23:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:12.952 21:23:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:12.952 21:23:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:12.952 21:23:36 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:12.952 21:23:36 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:12.952 21:23:36 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:12.952 21:23:36 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:12.952 21:23:36 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:12.952 21:23:36 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:12.952 21:23:36 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:12.952 21:23:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:12.952 21:23:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.952 21:23:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:12.952 21:23:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:12.952 21:23:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:12.952 21:23:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.952 21:23:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.952 21:23:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.952 21:23:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:12.952 21:23:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:12.952 21:23:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:12.952 21:23:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:12.952 21:23:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:12.952 21:23:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:12.952 21:23:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.952 21:23:36 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.952 21:23:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:12.952 21:23:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:12.952 21:23:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:12.952 21:23:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:12.952 21:23:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:12.952 21:23:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.952 21:23:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:12.952 21:23:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:12.952 21:23:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:12.952 21:23:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:12.952 21:23:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:12.952 21:23:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:12.952 Cannot find device "nvmf_tgt_br" 00:16:12.952 21:23:36 -- nvmf/common.sh@154 -- # true 00:16:12.952 21:23:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.952 Cannot find device "nvmf_tgt_br2" 00:16:12.952 21:23:36 -- nvmf/common.sh@155 -- # true 00:16:12.952 21:23:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:12.952 21:23:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:12.952 Cannot find device "nvmf_tgt_br" 00:16:12.952 21:23:36 -- nvmf/common.sh@157 -- # true 00:16:12.952 21:23:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:12.952 Cannot find device "nvmf_tgt_br2" 00:16:12.952 21:23:36 -- nvmf/common.sh@158 -- # true 00:16:12.952 21:23:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:12.953 21:23:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:12.953 21:23:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:12.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.953 21:23:36 -- nvmf/common.sh@161 -- # true 00:16:12.953 21:23:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:12.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.953 21:23:36 -- nvmf/common.sh@162 -- # true 00:16:12.953 21:23:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:12.953 21:23:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:12.953 21:23:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:12.953 21:23:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:12.953 21:23:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:12.953 21:23:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:12.953 21:23:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:12.953 21:23:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:12.953 21:23:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:12.953 21:23:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:12.953 21:23:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:12.953 21:23:36 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:12.953 21:23:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:12.953 21:23:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:12.953 21:23:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:12.953 21:23:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:13.212 21:23:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:13.212 21:23:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:13.212 21:23:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:13.212 21:23:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:13.212 21:23:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:13.212 21:23:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:13.212 21:23:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:13.212 21:23:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:13.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:16:13.212 00:16:13.212 --- 10.0.0.2 ping statistics --- 00:16:13.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.212 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:13.212 21:23:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:13.212 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:13.212 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:16:13.212 00:16:13.212 --- 10.0.0.3 ping statistics --- 00:16:13.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.212 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:13.212 21:23:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:13.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:13.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:13.212 00:16:13.212 --- 10.0.0.1 ping statistics --- 00:16:13.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.212 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:13.212 21:23:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.212 21:23:36 -- nvmf/common.sh@421 -- # return 0 00:16:13.212 21:23:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:13.212 21:23:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.212 21:23:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:13.212 21:23:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:13.212 21:23:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.212 21:23:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:13.212 21:23:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:13.212 21:23:36 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:13.212 21:23:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:13.212 21:23:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:13.212 21:23:36 -- common/autotest_common.sh@10 -- # set +x 00:16:13.212 21:23:36 -- nvmf/common.sh@469 -- # nvmfpid=82714 00:16:13.212 21:23:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:13.212 21:23:36 -- nvmf/common.sh@470 -- # waitforlisten 82714 00:16:13.212 21:23:36 -- common/autotest_common.sh@829 -- # '[' -z 82714 ']' 00:16:13.212 21:23:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.212 21:23:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.212 21:23:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.212 21:23:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.212 21:23:36 -- common/autotest_common.sh@10 -- # set +x 00:16:13.212 [2024-11-28 21:23:36.843574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:13.213 [2024-11-28 21:23:36.843659] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.471 [2024-11-28 21:23:36.984062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.471 [2024-11-28 21:23:37.017269] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:13.471 [2024-11-28 21:23:37.017461] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.471 [2024-11-28 21:23:37.017474] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.471 [2024-11-28 21:23:37.017482] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
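The nvmf_veth_init block above gives the remove_ifc test its own network sandbox: the target runs inside netns nvmf_tgt_ns_spdk with nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), the initiator side keeps nvmf_init_if (10.0.0.1), the veth peer ends are joined by the nvmf_br bridge, and an iptables rule admits TCP port 4420 on the initiator interface; the three pings are only a sanity check that the plumbing is up. A minimal recreation of the first path, condensed from the trace, looks roughly like the following (plus the matching 'ip link set ... up' calls shown above):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2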
00:16:13.471 [2024-11-28 21:23:37.017505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.038 21:23:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.038 21:23:37 -- common/autotest_common.sh@862 -- # return 0 00:16:14.038 21:23:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:14.038 21:23:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.038 21:23:37 -- common/autotest_common.sh@10 -- # set +x 00:16:14.296 21:23:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.297 21:23:37 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:14.297 21:23:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.297 21:23:37 -- common/autotest_common.sh@10 -- # set +x 00:16:14.297 [2024-11-28 21:23:37.828347] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.297 [2024-11-28 21:23:37.836524] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:14.297 null0 00:16:14.297 [2024-11-28 21:23:37.868409] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.297 21:23:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.297 21:23:37 -- host/discovery_remove_ifc.sh@59 -- # hostpid=82746 00:16:14.297 21:23:37 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 82746 /tmp/host.sock 00:16:14.297 21:23:37 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:14.297 21:23:37 -- common/autotest_common.sh@829 -- # '[' -z 82746 ']' 00:16:14.297 21:23:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:14.297 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:14.297 21:23:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.297 21:23:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:14.297 21:23:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.297 21:23:37 -- common/autotest_common.sh@10 -- # set +x 00:16:14.297 [2024-11-28 21:23:37.941321] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:14.297 [2024-11-28 21:23:37.941455] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82746 ] 00:16:14.555 [2024-11-28 21:23:38.082088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.555 [2024-11-28 21:23:38.122719] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:14.555 [2024-11-28 21:23:38.122895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.124 21:23:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.124 21:23:38 -- common/autotest_common.sh@862 -- # return 0 00:16:15.124 21:23:38 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:15.124 21:23:38 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:15.124 21:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.124 21:23:38 -- common/autotest_common.sh@10 -- # set +x 00:16:15.124 21:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.124 21:23:38 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:15.124 21:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.124 21:23:38 -- common/autotest_common.sh@10 -- # set +x 00:16:15.383 21:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.383 21:23:38 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:15.383 21:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.383 21:23:38 -- common/autotest_common.sh@10 -- # set +x 00:16:16.319 [2024-11-28 21:23:39.921453] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:16.319 [2024-11-28 21:23:39.921515] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:16.319 [2024-11-28 21:23:39.921534] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:16.319 [2024-11-28 21:23:39.927515] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:16.319 [2024-11-28 21:23:39.983004] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:16.319 [2024-11-28 21:23:39.983079] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:16.319 [2024-11-28 21:23:39.983105] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:16.319 [2024-11-28 21:23:39.983120] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:16.319 [2024-11-28 21:23:39.983171] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:16.319 21:23:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.319 21:23:39 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:16.319 21:23:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:16.319 21:23:39 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.319 [2024-11-28 21:23:39.989950] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1d3baf0 was disconnected and freed. delete nvme_qpair. 00:16:16.319 21:23:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.319 21:23:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:16.319 21:23:39 -- common/autotest_common.sh@10 -- # set +x 00:16:16.319 21:23:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:16.319 21:23:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:16.319 21:23:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.319 21:23:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:16.319 21:23:40 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:16.319 21:23:40 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:16.319 21:23:40 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:16.319 21:23:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:16.320 21:23:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.320 21:23:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.578 21:23:40 -- common/autotest_common.sh@10 -- # set +x 00:16:16.578 21:23:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:16.578 21:23:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:16.578 21:23:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:16.578 21:23:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.578 21:23:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:16.578 21:23:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:17.527 21:23:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:17.527 21:23:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.527 21:23:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:17.527 21:23:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.527 21:23:41 -- common/autotest_common.sh@10 -- # set +x 00:16:17.527 21:23:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:17.527 21:23:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:17.527 21:23:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.527 21:23:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:17.527 21:23:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:18.473 21:23:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:18.473 21:23:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.473 21:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.473 21:23:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:18.473 21:23:42 -- common/autotest_common.sh@10 -- # set +x 00:16:18.473 21:23:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:18.473 21:23:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:18.473 21:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.733 21:23:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:18.733 21:23:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:19.669 21:23:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:19.669 21:23:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:16:19.669 21:23:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:19.669 21:23:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.669 21:23:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:19.669 21:23:43 -- common/autotest_common.sh@10 -- # set +x 00:16:19.669 21:23:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:19.669 21:23:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.669 21:23:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:19.669 21:23:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:20.605 21:23:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:20.605 21:23:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.605 21:23:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:20.605 21:23:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:20.605 21:23:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.605 21:23:44 -- common/autotest_common.sh@10 -- # set +x 00:16:20.605 21:23:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:20.605 21:23:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.864 21:23:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:20.864 21:23:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:21.801 21:23:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:21.801 21:23:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.801 21:23:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.801 21:23:45 -- common/autotest_common.sh@10 -- # set +x 00:16:21.801 21:23:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:21.801 21:23:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:21.801 21:23:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:21.801 21:23:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.801 21:23:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:21.801 21:23:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:21.801 [2024-11-28 21:23:45.421627] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:21.801 [2024-11-28 21:23:45.421717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.801 [2024-11-28 21:23:45.421732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.801 [2024-11-28 21:23:45.421744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.801 [2024-11-28 21:23:45.421752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.801 [2024-11-28 21:23:45.421760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.801 [2024-11-28 21:23:45.421769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.801 [2024-11-28 21:23:45.421778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.801 [2024-11-28 21:23:45.421785] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.801 [2024-11-28 21:23:45.421794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.801 [2024-11-28 21:23:45.421802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.801 [2024-11-28 21:23:45.421811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d00890 is same with the state(5) to be set 00:16:21.801 [2024-11-28 21:23:45.431622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d00890 (9): Bad file descriptor 00:16:21.801 [2024-11-28 21:23:45.441640] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:22.739 21:23:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:22.739 21:23:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:22.739 21:23:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.739 21:23:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:22.739 21:23:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.739 21:23:46 -- common/autotest_common.sh@10 -- # set +x 00:16:22.739 21:23:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:22.739 [2024-11-28 21:23:46.459176] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:24.117 [2024-11-28 21:23:47.482146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:25.054 [2024-11-28 21:23:48.506167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:25.054 [2024-11-28 21:23:48.506550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d00890 with addr=10.0.0.2, port=4420 00:16:25.054 [2024-11-28 21:23:48.506602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d00890 is same with the state(5) to be set 00:16:25.054 [2024-11-28 21:23:48.506658] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:25.054 [2024-11-28 21:23:48.506681] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:25.054 [2024-11-28 21:23:48.506700] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:25.054 [2024-11-28 21:23:48.506721] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:25.054 [2024-11-28 21:23:48.507598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d00890 (9): Bad file descriptor 00:16:25.054 [2024-11-28 21:23:48.507673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
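The ABORTED/SQ-DELETION notices and the errno 110 reconnect attempts above are the expected host-side reaction once the test pulls the target interface out from under the connected controller. Condensed from the trace, the sequence that provokes them is roughly the following (assuming rpc_cmd resolves to scripts/rpc.py, as it does in the digest tests later in this log; a sketch only):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Discovery attach with short loss/reconnect/fast-io-fail timers, so the
  # controller is declared failed within a couple of seconds of path loss.
  $rpc_py -s /tmp/host.sock bdev_nvme_set_options -e 1
  $rpc_py -s /tmp/host.sock framework_start_init
  $rpc_py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  # Simulate losing the path: drop the address and down the veth inside
  # the target's namespace.
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down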
00:16:25.054 [2024-11-28 21:23:48.507724] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:25.054 [2024-11-28 21:23:48.507791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.054 [2024-11-28 21:23:48.507820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.054 [2024-11-28 21:23:48.507847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.054 [2024-11-28 21:23:48.507868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.054 [2024-11-28 21:23:48.507889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.054 [2024-11-28 21:23:48.507909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.054 [2024-11-28 21:23:48.507930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.054 [2024-11-28 21:23:48.507950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.054 [2024-11-28 21:23:48.507973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.054 [2024-11-28 21:23:48.507992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.054 [2024-11-28 21:23:48.508041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
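Once the controller is declared failed and the discovery entry is removed, the harness's wait_for_bdev '' simply polls the bdev list until it comes back empty; per the get_bdev_list trace above, that loop is essentially:

  # Poll until no bdev is reported any more (sketch of wait_for_bdev '').
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  while [[ "$($rpc_py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != '' ]]; do
      sleep 1
  done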
00:16:25.054 [2024-11-28 21:23:48.508074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cffef0 (9): Bad file descriptor 00:16:25.054 [2024-11-28 21:23:48.508702] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:25.054 [2024-11-28 21:23:48.508732] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:25.054 21:23:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.054 21:23:48 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:25.054 21:23:48 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:25.989 21:23:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:25.989 21:23:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:25.989 21:23:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:25.989 21:23:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.989 21:23:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:25.989 21:23:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:25.989 21:23:49 -- common/autotest_common.sh@10 -- # set +x 00:16:25.989 21:23:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.989 21:23:49 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:25.990 21:23:49 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:25.990 21:23:49 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:25.990 21:23:49 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:25.990 21:23:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:25.990 21:23:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:25.990 21:23:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:25.990 21:23:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:25.990 21:23:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:25.990 21:23:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.990 21:23:49 -- common/autotest_common.sh@10 -- # set +x 00:16:25.990 21:23:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.990 21:23:49 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:25.990 21:23:49 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:26.923 [2024-11-28 21:23:50.516821] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:26.924 [2024-11-28 21:23:50.516854] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:26.924 [2024-11-28 21:23:50.516888] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:26.924 [2024-11-28 21:23:50.522854] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:26.924 [2024-11-28 21:23:50.577765] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:26.924 [2024-11-28 21:23:50.577828] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:26.924 [2024-11-28 21:23:50.577850] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:26.924 [2024-11-28 21:23:50.577864] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:16:26.924 [2024-11-28 21:23:50.577873] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:26.924 [2024-11-28 21:23:50.585335] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1cefe30 was disconnected and freed. delete nvme_qpair. 00:16:27.182 21:23:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:27.182 21:23:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.182 21:23:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:27.182 21:23:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.182 21:23:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:27.182 21:23:50 -- common/autotest_common.sh@10 -- # set +x 00:16:27.182 21:23:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:27.182 21:23:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.182 21:23:50 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:27.182 21:23:50 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:27.182 21:23:50 -- host/discovery_remove_ifc.sh@90 -- # killprocess 82746 00:16:27.182 21:23:50 -- common/autotest_common.sh@936 -- # '[' -z 82746 ']' 00:16:27.182 21:23:50 -- common/autotest_common.sh@940 -- # kill -0 82746 00:16:27.182 21:23:50 -- common/autotest_common.sh@941 -- # uname 00:16:27.182 21:23:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:27.182 21:23:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82746 00:16:27.182 killing process with pid 82746 00:16:27.182 21:23:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:27.182 21:23:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:27.182 21:23:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82746' 00:16:27.182 21:23:50 -- common/autotest_common.sh@955 -- # kill 82746 00:16:27.182 21:23:50 -- common/autotest_common.sh@960 -- # wait 82746 00:16:27.182 21:23:50 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:27.182 21:23:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:27.182 21:23:50 -- nvmf/common.sh@116 -- # sync 00:16:27.440 21:23:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:27.440 21:23:50 -- nvmf/common.sh@119 -- # set +e 00:16:27.440 21:23:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:27.440 21:23:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:27.440 rmmod nvme_tcp 00:16:27.440 rmmod nvme_fabrics 00:16:27.440 rmmod nvme_keyring 00:16:27.440 21:23:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:27.440 21:23:51 -- nvmf/common.sh@123 -- # set -e 00:16:27.440 21:23:51 -- nvmf/common.sh@124 -- # return 0 00:16:27.440 21:23:51 -- nvmf/common.sh@477 -- # '[' -n 82714 ']' 00:16:27.440 21:23:51 -- nvmf/common.sh@478 -- # killprocess 82714 00:16:27.440 21:23:51 -- common/autotest_common.sh@936 -- # '[' -z 82714 ']' 00:16:27.440 21:23:51 -- common/autotest_common.sh@940 -- # kill -0 82714 00:16:27.440 21:23:51 -- common/autotest_common.sh@941 -- # uname 00:16:27.440 21:23:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:27.440 21:23:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82714 00:16:27.440 killing process with pid 82714 00:16:27.440 21:23:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:27.440 21:23:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
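The recovery half just confirmed above is symmetric: the address is added back, the veth comes up, and the same polling loop waits until the freshly discovered namespace (nvme1n1 this time) appears. As a sketch, using the commands from the trace:

  # Restore the path and wait for the re-attached namespace to show up.
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  while [[ "$($rpc_py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != nvme1n1 ]]; do
      sleep 1
  done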
00:16:27.440 21:23:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82714' 00:16:27.440 21:23:51 -- common/autotest_common.sh@955 -- # kill 82714 00:16:27.440 21:23:51 -- common/autotest_common.sh@960 -- # wait 82714 00:16:27.699 21:23:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:27.699 21:23:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:27.699 21:23:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:27.699 21:23:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:27.699 21:23:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:27.699 21:23:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.699 21:23:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.699 21:23:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.699 21:23:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:27.699 00:16:27.699 real 0m14.981s 00:16:27.699 user 0m23.922s 00:16:27.699 sys 0m2.482s 00:16:27.699 21:23:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:27.699 21:23:51 -- common/autotest_common.sh@10 -- # set +x 00:16:27.699 ************************************ 00:16:27.699 END TEST nvmf_discovery_remove_ifc 00:16:27.699 ************************************ 00:16:27.699 21:23:51 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:16:27.699 21:23:51 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:27.699 21:23:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:27.699 21:23:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:27.699 21:23:51 -- common/autotest_common.sh@10 -- # set +x 00:16:27.699 ************************************ 00:16:27.699 START TEST nvmf_digest 00:16:27.699 ************************************ 00:16:27.699 21:23:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:27.699 * Looking for test storage... 00:16:27.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:27.699 21:23:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:27.699 21:23:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:27.699 21:23:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:27.699 21:23:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:27.699 21:23:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:27.699 21:23:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:27.699 21:23:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:27.699 21:23:51 -- scripts/common.sh@335 -- # IFS=.-: 00:16:27.699 21:23:51 -- scripts/common.sh@335 -- # read -ra ver1 00:16:27.699 21:23:51 -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.699 21:23:51 -- scripts/common.sh@336 -- # read -ra ver2 00:16:27.699 21:23:51 -- scripts/common.sh@337 -- # local 'op=<' 00:16:27.699 21:23:51 -- scripts/common.sh@339 -- # ver1_l=2 00:16:27.699 21:23:51 -- scripts/common.sh@340 -- # ver2_l=1 00:16:27.699 21:23:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:27.699 21:23:51 -- scripts/common.sh@343 -- # case "$op" in 00:16:27.699 21:23:51 -- scripts/common.sh@344 -- # : 1 00:16:27.699 21:23:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:27.699 21:23:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:27.699 21:23:51 -- scripts/common.sh@364 -- # decimal 1 00:16:27.699 21:23:51 -- scripts/common.sh@352 -- # local d=1 00:16:27.699 21:23:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.699 21:23:51 -- scripts/common.sh@354 -- # echo 1 00:16:27.699 21:23:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:27.699 21:23:51 -- scripts/common.sh@365 -- # decimal 2 00:16:27.699 21:23:51 -- scripts/common.sh@352 -- # local d=2 00:16:27.699 21:23:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.699 21:23:51 -- scripts/common.sh@354 -- # echo 2 00:16:27.699 21:23:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:27.699 21:23:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:27.699 21:23:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:27.699 21:23:51 -- scripts/common.sh@367 -- # return 0 00:16:27.699 21:23:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.699 21:23:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:27.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.699 --rc genhtml_branch_coverage=1 00:16:27.700 --rc genhtml_function_coverage=1 00:16:27.700 --rc genhtml_legend=1 00:16:27.700 --rc geninfo_all_blocks=1 00:16:27.700 --rc geninfo_unexecuted_blocks=1 00:16:27.700 00:16:27.700 ' 00:16:27.700 21:23:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:27.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.700 --rc genhtml_branch_coverage=1 00:16:27.700 --rc genhtml_function_coverage=1 00:16:27.700 --rc genhtml_legend=1 00:16:27.700 --rc geninfo_all_blocks=1 00:16:27.700 --rc geninfo_unexecuted_blocks=1 00:16:27.700 00:16:27.700 ' 00:16:27.700 21:23:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:27.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.700 --rc genhtml_branch_coverage=1 00:16:27.700 --rc genhtml_function_coverage=1 00:16:27.700 --rc genhtml_legend=1 00:16:27.700 --rc geninfo_all_blocks=1 00:16:27.700 --rc geninfo_unexecuted_blocks=1 00:16:27.700 00:16:27.700 ' 00:16:27.700 21:23:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:27.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.700 --rc genhtml_branch_coverage=1 00:16:27.700 --rc genhtml_function_coverage=1 00:16:27.700 --rc genhtml_legend=1 00:16:27.700 --rc geninfo_all_blocks=1 00:16:27.700 --rc geninfo_unexecuted_blocks=1 00:16:27.700 00:16:27.700 ' 00:16:27.700 21:23:51 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:27.700 21:23:51 -- nvmf/common.sh@7 -- # uname -s 00:16:27.700 21:23:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.700 21:23:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.700 21:23:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.700 21:23:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.700 21:23:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.700 21:23:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.700 21:23:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.700 21:23:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.700 21:23:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.700 21:23:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.959 21:23:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:16:27.959 
21:23:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:16:27.959 21:23:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.959 21:23:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.959 21:23:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:27.959 21:23:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:27.959 21:23:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.959 21:23:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.959 21:23:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.959 21:23:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.959 21:23:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.959 21:23:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.959 21:23:51 -- paths/export.sh@5 -- # export PATH 00:16:27.959 21:23:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.959 21:23:51 -- nvmf/common.sh@46 -- # : 0 00:16:27.959 21:23:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:27.959 21:23:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:27.959 21:23:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:27.959 21:23:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.959 21:23:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.959 21:23:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:16:27.959 21:23:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:27.959 21:23:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:27.959 21:23:51 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:27.959 21:23:51 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:27.959 21:23:51 -- host/digest.sh@16 -- # runtime=2 00:16:27.959 21:23:51 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:16:27.959 21:23:51 -- host/digest.sh@132 -- # nvmftestinit 00:16:27.959 21:23:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:27.959 21:23:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.959 21:23:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:27.959 21:23:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:27.959 21:23:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:27.959 21:23:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.959 21:23:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.959 21:23:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.959 21:23:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:27.959 21:23:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:27.959 21:23:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:27.959 21:23:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:27.959 21:23:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:27.959 21:23:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:27.959 21:23:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.959 21:23:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.959 21:23:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:27.959 21:23:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:27.959 21:23:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:27.959 21:23:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:27.959 21:23:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:27.959 21:23:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.959 21:23:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:27.959 21:23:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:27.959 21:23:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:27.959 21:23:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:27.959 21:23:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:27.959 21:23:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:27.959 Cannot find device "nvmf_tgt_br" 00:16:27.959 21:23:51 -- nvmf/common.sh@154 -- # true 00:16:27.959 21:23:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:27.959 Cannot find device "nvmf_tgt_br2" 00:16:27.959 21:23:51 -- nvmf/common.sh@155 -- # true 00:16:27.959 21:23:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:27.959 21:23:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:27.959 Cannot find device "nvmf_tgt_br" 00:16:27.959 21:23:51 -- nvmf/common.sh@157 -- # true 00:16:27.959 21:23:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:27.959 Cannot find device "nvmf_tgt_br2" 00:16:27.959 21:23:51 -- nvmf/common.sh@158 -- # true 00:16:27.959 21:23:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:27.959 21:23:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:27.959 
21:23:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:27.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.959 21:23:51 -- nvmf/common.sh@161 -- # true 00:16:27.959 21:23:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:27.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.959 21:23:51 -- nvmf/common.sh@162 -- # true 00:16:27.959 21:23:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:27.959 21:23:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:27.959 21:23:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:27.959 21:23:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:27.959 21:23:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:27.959 21:23:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:27.959 21:23:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:27.959 21:23:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:28.217 21:23:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:28.217 21:23:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:28.217 21:23:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:28.217 21:23:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:28.217 21:23:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:28.217 21:23:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:28.218 21:23:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:28.218 21:23:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:28.218 21:23:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:28.218 21:23:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:28.218 21:23:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:28.218 21:23:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:28.218 21:23:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:28.218 21:23:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:28.218 21:23:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:28.218 21:23:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:28.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:16:28.218 00:16:28.218 --- 10.0.0.2 ping statistics --- 00:16:28.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.218 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:28.218 21:23:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:28.218 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:28.218 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:16:28.218 00:16:28.218 --- 10.0.0.3 ping statistics --- 00:16:28.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.218 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:28.218 21:23:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:28.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:28.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:28.218 00:16:28.218 --- 10.0.0.1 ping statistics --- 00:16:28.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.218 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:28.218 21:23:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.218 21:23:51 -- nvmf/common.sh@421 -- # return 0 00:16:28.218 21:23:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:28.218 21:23:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.218 21:23:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:28.218 21:23:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:28.218 21:23:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.218 21:23:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:28.218 21:23:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:28.218 21:23:51 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:28.218 21:23:51 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:16:28.218 21:23:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:28.218 21:23:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:28.218 21:23:51 -- common/autotest_common.sh@10 -- # set +x 00:16:28.218 ************************************ 00:16:28.218 START TEST nvmf_digest_clean 00:16:28.218 ************************************ 00:16:28.218 21:23:51 -- common/autotest_common.sh@1114 -- # run_digest 00:16:28.218 21:23:51 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:16:28.218 21:23:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:28.218 21:23:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:28.218 21:23:51 -- common/autotest_common.sh@10 -- # set +x 00:16:28.218 21:23:51 -- nvmf/common.sh@469 -- # nvmfpid=83168 00:16:28.218 21:23:51 -- nvmf/common.sh@470 -- # waitforlisten 83168 00:16:28.218 21:23:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:28.218 21:23:51 -- common/autotest_common.sh@829 -- # '[' -z 83168 ']' 00:16:28.218 21:23:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.218 21:23:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.218 21:23:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.218 21:23:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.218 21:23:51 -- common/autotest_common.sh@10 -- # set +x 00:16:28.218 [2024-11-28 21:23:51.903724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:28.218 [2024-11-28 21:23:51.903817] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.476 [2024-11-28 21:23:52.042811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.476 [2024-11-28 21:23:52.076705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:28.476 [2024-11-28 21:23:52.076847] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.476 [2024-11-28 21:23:52.076866] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.476 [2024-11-28 21:23:52.076878] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.476 [2024-11-28 21:23:52.076915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.412 21:23:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.412 21:23:52 -- common/autotest_common.sh@862 -- # return 0 00:16:29.412 21:23:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:29.412 21:23:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:29.412 21:23:52 -- common/autotest_common.sh@10 -- # set +x 00:16:29.412 21:23:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.412 21:23:52 -- host/digest.sh@120 -- # common_target_config 00:16:29.412 21:23:52 -- host/digest.sh@43 -- # rpc_cmd 00:16:29.412 21:23:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.412 21:23:52 -- common/autotest_common.sh@10 -- # set +x 00:16:29.412 null0 00:16:29.412 [2024-11-28 21:23:52.921271] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.412 [2024-11-28 21:23:52.945362] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:29.412 21:23:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.412 21:23:52 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:16:29.412 21:23:52 -- host/digest.sh@77 -- # local rw bs qd 00:16:29.412 21:23:52 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:29.412 21:23:52 -- host/digest.sh@80 -- # rw=randread 00:16:29.412 21:23:52 -- host/digest.sh@80 -- # bs=4096 00:16:29.412 21:23:52 -- host/digest.sh@80 -- # qd=128 00:16:29.412 21:23:52 -- host/digest.sh@82 -- # bperfpid=83200 00:16:29.412 21:23:52 -- host/digest.sh@83 -- # waitforlisten 83200 /var/tmp/bperf.sock 00:16:29.412 21:23:52 -- common/autotest_common.sh@829 -- # '[' -z 83200 ']' 00:16:29.412 21:23:52 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:29.412 21:23:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:29.412 21:23:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.412 21:23:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
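Each run_bperf invocation that follows boils down to the same pattern: start bdevperf in RPC-wait mode against /var/tmp/bperf.sock, attach an NVMe/TCP controller with the digest option under test, run the workload for two seconds, then read back the accel crc32c statistics. Condensed from the command lines traced in this section (paths as used elsewhere in this log; a sketch, not the digest.sh source):

  bperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

  # randread, 4 KiB, queue depth 128, data digest enabled on the attach.
  $bperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # (the harness waits for /var/tmp/bperf.sock to be listening before issuing RPCs)
  $rpc_py -s /var/tmp/bperf.sock framework_start_init
  $rpc_py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $bperf_py -s /var/tmp/bperf.sock perform_tests
  # Afterwards the test checks which accel module executed the crc32c work:
  $rpc_py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'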
00:16:29.412 21:23:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.412 21:23:52 -- common/autotest_common.sh@10 -- # set +x 00:16:29.412 [2024-11-28 21:23:53.012656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:29.412 [2024-11-28 21:23:53.013267] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83200 ] 00:16:29.671 [2024-11-28 21:23:53.162202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.671 [2024-11-28 21:23:53.204365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.273 21:23:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.274 21:23:53 -- common/autotest_common.sh@862 -- # return 0 00:16:30.274 21:23:53 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:30.274 21:23:53 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:30.274 21:23:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:30.532 21:23:54 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:30.532 21:23:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:31.100 nvme0n1 00:16:31.100 21:23:54 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:31.100 21:23:54 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:31.100 Running I/O for 2 seconds... 
00:16:33.002 00:16:33.002 Latency(us) 00:16:33.002 [2024-11-28T21:23:56.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.002 [2024-11-28T21:23:56.745Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:33.002 nvme0n1 : 2.01 16506.09 64.48 0.00 0.00 7749.63 7089.80 20733.21 00:16:33.002 [2024-11-28T21:23:56.745Z] =================================================================================================================== 00:16:33.002 [2024-11-28T21:23:56.745Z] Total : 16506.09 64.48 0.00 0.00 7749.63 7089.80 20733.21 00:16:33.002 0 00:16:33.002 21:23:56 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:33.002 21:23:56 -- host/digest.sh@92 -- # get_accel_stats 00:16:33.002 21:23:56 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:33.002 21:23:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:33.002 21:23:56 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:33.002 | select(.opcode=="crc32c") 00:16:33.002 | "\(.module_name) \(.executed)"' 00:16:33.570 21:23:57 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:33.570 21:23:57 -- host/digest.sh@93 -- # exp_module=software 00:16:33.570 21:23:57 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:33.570 21:23:57 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:33.570 21:23:57 -- host/digest.sh@97 -- # killprocess 83200 00:16:33.570 21:23:57 -- common/autotest_common.sh@936 -- # '[' -z 83200 ']' 00:16:33.570 21:23:57 -- common/autotest_common.sh@940 -- # kill -0 83200 00:16:33.570 21:23:57 -- common/autotest_common.sh@941 -- # uname 00:16:33.570 21:23:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:33.570 21:23:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83200 00:16:33.570 21:23:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:33.570 21:23:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:33.570 killing process with pid 83200 00:16:33.570 21:23:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83200' 00:16:33.570 Received shutdown signal, test time was about 2.000000 seconds 00:16:33.570 00:16:33.570 Latency(us) 00:16:33.570 [2024-11-28T21:23:57.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.570 [2024-11-28T21:23:57.313Z] =================================================================================================================== 00:16:33.570 [2024-11-28T21:23:57.313Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:33.570 21:23:57 -- common/autotest_common.sh@955 -- # kill 83200 00:16:33.570 21:23:57 -- common/autotest_common.sh@960 -- # wait 83200 00:16:33.570 21:23:57 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:16:33.570 21:23:57 -- host/digest.sh@77 -- # local rw bs qd 00:16:33.570 21:23:57 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:33.570 21:23:57 -- host/digest.sh@80 -- # rw=randread 00:16:33.570 21:23:57 -- host/digest.sh@80 -- # bs=131072 00:16:33.570 21:23:57 -- host/digest.sh@80 -- # qd=16 00:16:33.570 21:23:57 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:33.570 21:23:57 -- host/digest.sh@82 -- # bperfpid=83262 00:16:33.570 21:23:57 -- host/digest.sh@83 -- # waitforlisten 83262 /var/tmp/bperf.sock 00:16:33.570 21:23:57 -- 
common/autotest_common.sh@829 -- # '[' -z 83262 ']' 00:16:33.570 21:23:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:33.570 21:23:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:33.570 21:23:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:33.570 21:23:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.570 21:23:57 -- common/autotest_common.sh@10 -- # set +x 00:16:33.570 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:33.570 Zero copy mechanism will not be used. 00:16:33.570 [2024-11-28 21:23:57.260164] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:33.570 [2024-11-28 21:23:57.260253] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83262 ] 00:16:33.829 [2024-11-28 21:23:57.393341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.829 [2024-11-28 21:23:57.427336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.829 21:23:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.829 21:23:57 -- common/autotest_common.sh@862 -- # return 0 00:16:33.829 21:23:57 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:33.829 21:23:57 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:33.829 21:23:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:34.088 21:23:57 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:34.088 21:23:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:34.347 nvme0n1 00:16:34.347 21:23:58 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:34.347 21:23:58 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:34.605 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:34.605 Zero copy mechanism will not be used. 00:16:34.605 Running I/O for 2 seconds... 
00:16:36.510 00:16:36.510 Latency(us) 00:16:36.510 [2024-11-28T21:24:00.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.510 [2024-11-28T21:24:00.253Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:36.510 nvme0n1 : 2.00 8063.62 1007.95 0.00 0.00 1981.53 1720.32 7447.27 00:16:36.510 [2024-11-28T21:24:00.253Z] =================================================================================================================== 00:16:36.510 [2024-11-28T21:24:00.253Z] Total : 8063.62 1007.95 0.00 0.00 1981.53 1720.32 7447.27 00:16:36.510 0 00:16:36.510 21:24:00 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:36.510 21:24:00 -- host/digest.sh@92 -- # get_accel_stats 00:16:36.510 21:24:00 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:36.510 21:24:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:36.510 21:24:00 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:36.510 | select(.opcode=="crc32c") 00:16:36.510 | "\(.module_name) \(.executed)"' 00:16:36.770 21:24:00 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:36.770 21:24:00 -- host/digest.sh@93 -- # exp_module=software 00:16:36.770 21:24:00 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:36.770 21:24:00 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:36.770 21:24:00 -- host/digest.sh@97 -- # killprocess 83262 00:16:36.770 21:24:00 -- common/autotest_common.sh@936 -- # '[' -z 83262 ']' 00:16:36.770 21:24:00 -- common/autotest_common.sh@940 -- # kill -0 83262 00:16:36.770 21:24:00 -- common/autotest_common.sh@941 -- # uname 00:16:36.770 21:24:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:36.770 21:24:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83262 00:16:36.770 21:24:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:36.770 21:24:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:36.770 killing process with pid 83262 00:16:36.770 21:24:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83262' 00:16:36.770 Received shutdown signal, test time was about 2.000000 seconds 00:16:36.770 00:16:36.770 Latency(us) 00:16:36.770 [2024-11-28T21:24:00.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.770 [2024-11-28T21:24:00.513Z] =================================================================================================================== 00:16:36.770 [2024-11-28T21:24:00.513Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:36.770 21:24:00 -- common/autotest_common.sh@955 -- # kill 83262 00:16:36.770 21:24:00 -- common/autotest_common.sh@960 -- # wait 83262 00:16:37.030 21:24:00 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:16:37.030 21:24:00 -- host/digest.sh@77 -- # local rw bs qd 00:16:37.030 21:24:00 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:37.030 21:24:00 -- host/digest.sh@80 -- # rw=randwrite 00:16:37.030 21:24:00 -- host/digest.sh@80 -- # bs=4096 00:16:37.030 21:24:00 -- host/digest.sh@80 -- # qd=128 00:16:37.030 21:24:00 -- host/digest.sh@82 -- # bperfpid=83315 00:16:37.030 21:24:00 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:37.030 21:24:00 -- host/digest.sh@83 -- # waitforlisten 83315 /var/tmp/bperf.sock 00:16:37.030 21:24:00 -- 
common/autotest_common.sh@829 -- # '[' -z 83315 ']' 00:16:37.030 21:24:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:37.030 21:24:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:37.030 21:24:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:37.030 21:24:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.030 21:24:00 -- common/autotest_common.sh@10 -- # set +x 00:16:37.030 [2024-11-28 21:24:00.614880] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:37.030 [2024-11-28 21:24:00.614986] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83315 ] 00:16:37.030 [2024-11-28 21:24:00.748731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.289 [2024-11-28 21:24:00.782980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.289 21:24:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.289 21:24:00 -- common/autotest_common.sh@862 -- # return 0 00:16:37.289 21:24:00 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:37.289 21:24:00 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:37.289 21:24:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:37.547 21:24:01 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:37.547 21:24:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:37.805 nvme0n1 00:16:37.805 21:24:01 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:37.805 21:24:01 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:38.063 Running I/O for 2 seconds... 
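The get_accel_stats / read pair traced above is what turns each digest run into a pass/fail check. Roughly, it does the following (a sketch based on the traced RPC call and jq filter, with the error handling simplified):

```bash
# Pull accel framework stats from the bperf app and keep only the crc32c row.
read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

# The run only counts as a digest test if CRC32C was actually computed, and by
# the expected module ("software" here, since no accel hardware is configured).
(( acc_executed > 0 ))        || { echo "no crc32c operations executed"; exit 1; }
[[ $acc_module == software ]] || { echo "unexpected accel module: $acc_module"; exit 1; }
```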
00:16:39.970 00:16:39.970 Latency(us) 00:16:39.970 [2024-11-28T21:24:03.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.970 [2024-11-28T21:24:03.713Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.970 nvme0n1 : 2.01 16975.01 66.31 0.00 0.00 7534.45 6613.18 15609.48 00:16:39.970 [2024-11-28T21:24:03.713Z] =================================================================================================================== 00:16:39.970 [2024-11-28T21:24:03.713Z] Total : 16975.01 66.31 0.00 0.00 7534.45 6613.18 15609.48 00:16:39.970 0 00:16:39.970 21:24:03 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:39.970 21:24:03 -- host/digest.sh@92 -- # get_accel_stats 00:16:39.970 21:24:03 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:39.970 | select(.opcode=="crc32c") 00:16:39.970 | "\(.module_name) \(.executed)"' 00:16:39.970 21:24:03 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:39.970 21:24:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:40.229 21:24:03 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:40.229 21:24:03 -- host/digest.sh@93 -- # exp_module=software 00:16:40.229 21:24:03 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:40.229 21:24:03 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:40.229 21:24:03 -- host/digest.sh@97 -- # killprocess 83315 00:16:40.229 21:24:03 -- common/autotest_common.sh@936 -- # '[' -z 83315 ']' 00:16:40.229 21:24:03 -- common/autotest_common.sh@940 -- # kill -0 83315 00:16:40.229 21:24:03 -- common/autotest_common.sh@941 -- # uname 00:16:40.229 21:24:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.229 21:24:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83315 00:16:40.229 killing process with pid 83315 00:16:40.229 Received shutdown signal, test time was about 2.000000 seconds 00:16:40.229 00:16:40.229 Latency(us) 00:16:40.229 [2024-11-28T21:24:03.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.229 [2024-11-28T21:24:03.972Z] =================================================================================================================== 00:16:40.229 [2024-11-28T21:24:03.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:40.229 21:24:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:40.229 21:24:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:40.229 21:24:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83315' 00:16:40.229 21:24:03 -- common/autotest_common.sh@955 -- # kill 83315 00:16:40.229 21:24:03 -- common/autotest_common.sh@960 -- # wait 83315 00:16:40.489 21:24:04 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:16:40.489 21:24:04 -- host/digest.sh@77 -- # local rw bs qd 00:16:40.489 21:24:04 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:40.489 21:24:04 -- host/digest.sh@80 -- # rw=randwrite 00:16:40.489 21:24:04 -- host/digest.sh@80 -- # bs=131072 00:16:40.489 21:24:04 -- host/digest.sh@80 -- # qd=16 00:16:40.489 21:24:04 -- host/digest.sh@82 -- # bperfpid=83363 00:16:40.489 21:24:04 -- host/digest.sh@83 -- # waitforlisten 83363 /var/tmp/bperf.sock 00:16:40.489 21:24:04 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:40.489 21:24:04 -- 
common/autotest_common.sh@829 -- # '[' -z 83363 ']' 00:16:40.489 21:24:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:40.489 21:24:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.489 21:24:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:40.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:40.489 21:24:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.489 21:24:04 -- common/autotest_common.sh@10 -- # set +x 00:16:40.489 [2024-11-28 21:24:04.126360] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:40.489 [2024-11-28 21:24:04.126726] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83363 ] 00:16:40.489 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:40.489 Zero copy mechanism will not be used. 00:16:40.748 [2024-11-28 21:24:04.259113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.748 [2024-11-28 21:24:04.293126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.683 21:24:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.683 21:24:05 -- common/autotest_common.sh@862 -- # return 0 00:16:41.683 21:24:05 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:41.683 21:24:05 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:41.683 21:24:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:41.683 21:24:05 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:41.683 21:24:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:41.942 nvme0n1 00:16:41.942 21:24:05 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:41.942 21:24:05 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:42.200 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:42.200 Zero copy mechanism will not be used. 00:16:42.200 Running I/O for 2 seconds... 
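As a sanity check on the bdevperf tables above, the MiB/s column is simply IOPS multiplied by the I/O size; the two worked examples below reproduce the figures already reported in this log.

```bash
# Throughput = IOPS * IO size.  For the 4 KiB (4096-byte) randwrite run above:
awk 'BEGIN { printf "%.2f MiB/s\n", 16975.01 * 4096   / (1024 * 1024) }'  # -> 66.31
# and for the 128 KiB (131072-byte) randread run earlier in this section:
awk 'BEGIN { printf "%.2f MiB/s\n',  8063.62 * 131072 / (1024 * 1024) }'  # -> 1007.95
```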
00:16:44.135 00:16:44.135 Latency(us) 00:16:44.135 [2024-11-28T21:24:07.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.135 [2024-11-28T21:24:07.878Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:44.135 nvme0n1 : 2.00 6592.39 824.05 0.00 0.00 2421.95 2085.24 10664.49 00:16:44.135 [2024-11-28T21:24:07.878Z] =================================================================================================================== 00:16:44.135 [2024-11-28T21:24:07.878Z] Total : 6592.39 824.05 0.00 0.00 2421.95 2085.24 10664.49 00:16:44.135 0 00:16:44.135 21:24:07 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:44.135 21:24:07 -- host/digest.sh@92 -- # get_accel_stats 00:16:44.135 21:24:07 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:44.135 21:24:07 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:44.135 | select(.opcode=="crc32c") 00:16:44.135 | "\(.module_name) \(.executed)"' 00:16:44.135 21:24:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:44.394 21:24:08 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:44.394 21:24:08 -- host/digest.sh@93 -- # exp_module=software 00:16:44.394 21:24:08 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:44.394 21:24:08 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:44.394 21:24:08 -- host/digest.sh@97 -- # killprocess 83363 00:16:44.394 21:24:08 -- common/autotest_common.sh@936 -- # '[' -z 83363 ']' 00:16:44.394 21:24:08 -- common/autotest_common.sh@940 -- # kill -0 83363 00:16:44.394 21:24:08 -- common/autotest_common.sh@941 -- # uname 00:16:44.394 21:24:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:44.394 21:24:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83363 00:16:44.394 killing process with pid 83363 00:16:44.394 Received shutdown signal, test time was about 2.000000 seconds 00:16:44.394 00:16:44.394 Latency(us) 00:16:44.394 [2024-11-28T21:24:08.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.394 [2024-11-28T21:24:08.137Z] =================================================================================================================== 00:16:44.394 [2024-11-28T21:24:08.137Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:44.394 21:24:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:44.394 21:24:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:44.394 21:24:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83363' 00:16:44.394 21:24:08 -- common/autotest_common.sh@955 -- # kill 83363 00:16:44.394 21:24:08 -- common/autotest_common.sh@960 -- # wait 83363 00:16:44.653 21:24:08 -- host/digest.sh@126 -- # killprocess 83168 00:16:44.653 21:24:08 -- common/autotest_common.sh@936 -- # '[' -z 83168 ']' 00:16:44.653 21:24:08 -- common/autotest_common.sh@940 -- # kill -0 83168 00:16:44.653 21:24:08 -- common/autotest_common.sh@941 -- # uname 00:16:44.653 21:24:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:44.653 21:24:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83168 00:16:44.653 killing process with pid 83168 00:16:44.653 21:24:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:44.653 21:24:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:44.653 21:24:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83168' 
00:16:44.653 21:24:08 -- common/autotest_common.sh@955 -- # kill 83168 00:16:44.653 21:24:08 -- common/autotest_common.sh@960 -- # wait 83168 00:16:44.653 00:16:44.653 real 0m16.545s 00:16:44.653 user 0m31.793s 00:16:44.653 sys 0m4.427s 00:16:44.653 21:24:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:44.653 ************************************ 00:16:44.653 END TEST nvmf_digest_clean 00:16:44.653 ************************************ 00:16:44.653 21:24:08 -- common/autotest_common.sh@10 -- # set +x 00:16:44.912 21:24:08 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:16:44.912 21:24:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:44.912 21:24:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:44.912 21:24:08 -- common/autotest_common.sh@10 -- # set +x 00:16:44.912 ************************************ 00:16:44.912 START TEST nvmf_digest_error 00:16:44.912 ************************************ 00:16:44.912 21:24:08 -- common/autotest_common.sh@1114 -- # run_digest_error 00:16:44.912 21:24:08 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:16:44.912 21:24:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:44.912 21:24:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:44.912 21:24:08 -- common/autotest_common.sh@10 -- # set +x 00:16:44.912 21:24:08 -- nvmf/common.sh@469 -- # nvmfpid=83446 00:16:44.912 21:24:08 -- nvmf/common.sh@470 -- # waitforlisten 83446 00:16:44.912 21:24:08 -- common/autotest_common.sh@829 -- # '[' -z 83446 ']' 00:16:44.912 21:24:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:44.912 21:24:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.912 21:24:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.912 21:24:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.912 21:24:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.912 21:24:08 -- common/autotest_common.sh@10 -- # set +x 00:16:44.912 [2024-11-28 21:24:08.504749] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:44.912 [2024-11-28 21:24:08.504856] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.912 [2024-11-28 21:24:08.643429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.170 [2024-11-28 21:24:08.678935] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:45.170 [2024-11-28 21:24:08.679128] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.170 [2024-11-28 21:24:08.679167] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.170 [2024-11-28 21:24:08.679177] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
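The nvmf_digest_error test that starts here brings the target up paused so the crc32c opcode can be re-routed before any connection exists. Below is a minimal sketch of that bring-up, based on the commands traced here and in the lines that follow; the network-namespace wrapping is omitted, and the framework_start_init step is an assumption (it must happen before the TCP transport below can be created, but the trace does not show it verbatim).

```bash
SPDK=/home/vagrant/spdk_repo/spdk

# Start the nvmf target paused (--wait-for-rpc); it serves RPCs on the
# default /var/tmp/spdk.sock socket mentioned in the trace.
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &

# While still paused, route every crc32c operation to the accel "error"
# module, so digest failures can be injected later by the error tests.
"$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error

# Assumption: init is resumed before the TCP transport, listener and null0
# bdev are configured (the trace only shows the resulting notices).
"$SPDK/scripts/rpc.py" framework_start_init
```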
00:16:45.170 [2024-11-28 21:24:08.679201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.738 21:24:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.738 21:24:09 -- common/autotest_common.sh@862 -- # return 0 00:16:45.738 21:24:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:45.738 21:24:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:45.738 21:24:09 -- common/autotest_common.sh@10 -- # set +x 00:16:45.738 21:24:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.738 21:24:09 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:45.738 21:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.738 21:24:09 -- common/autotest_common.sh@10 -- # set +x 00:16:45.738 [2024-11-28 21:24:09.447692] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:45.738 21:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.738 21:24:09 -- host/digest.sh@104 -- # common_target_config 00:16:45.738 21:24:09 -- host/digest.sh@43 -- # rpc_cmd 00:16:45.738 21:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.738 21:24:09 -- common/autotest_common.sh@10 -- # set +x 00:16:45.997 null0 00:16:45.997 [2024-11-28 21:24:09.517605] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.997 [2024-11-28 21:24:09.541729] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.997 21:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.997 21:24:09 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:16:45.997 21:24:09 -- host/digest.sh@54 -- # local rw bs qd 00:16:45.997 21:24:09 -- host/digest.sh@56 -- # rw=randread 00:16:45.997 21:24:09 -- host/digest.sh@56 -- # bs=4096 00:16:45.997 21:24:09 -- host/digest.sh@56 -- # qd=128 00:16:45.997 21:24:09 -- host/digest.sh@58 -- # bperfpid=83478 00:16:45.997 21:24:09 -- host/digest.sh@60 -- # waitforlisten 83478 /var/tmp/bperf.sock 00:16:45.997 21:24:09 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:45.997 21:24:09 -- common/autotest_common.sh@829 -- # '[' -z 83478 ']' 00:16:45.997 21:24:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:45.997 21:24:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:45.997 21:24:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:45.997 21:24:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.997 21:24:09 -- common/autotest_common.sh@10 -- # set +x 00:16:45.997 [2024-11-28 21:24:09.593894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:45.997 [2024-11-28 21:24:09.594014] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83478 ] 00:16:45.997 [2024-11-28 21:24:09.732450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.257 [2024-11-28 21:24:09.766205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.194 21:24:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:47.194 21:24:10 -- common/autotest_common.sh@862 -- # return 0 00:16:47.194 21:24:10 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:47.194 21:24:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:47.194 21:24:10 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:47.194 21:24:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.194 21:24:10 -- common/autotest_common.sh@10 -- # set +x 00:16:47.194 21:24:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.194 21:24:10 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:47.194 21:24:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:47.452 nvme0n1 00:16:47.452 21:24:11 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:47.452 21:24:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.453 21:24:11 -- common/autotest_common.sh@10 -- # set +x 00:16:47.453 21:24:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.453 21:24:11 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:47.453 21:24:11 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:47.712 Running I/O for 2 seconds... 
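Before the 2-second run above starts, the trace arms digest-error injection across the two RPC sockets. The sketch below reconstructs that traced sequence; socket paths are the ones printed in this log, and the exact semantics of "-i 256" are left to the accel error module rather than interpreted here.

```bash
SPDK=/home/vagrant/spdk_repo/spdk
BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf initiator
TGT_RPC="$SPDK/scripts/rpc.py"                            # nvmf target on /var/tmp/spdk.sock

# Initiator: keep per-error statistics and never retry, so every injected
# digest failure completes back to bdevperf as an error.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target: make sure injection is off while the controller attaches, then arm
# crc32c corruption (crc32c was assigned to the "error" module earlier).
$TGT_RPC accel_error_inject_error -o crc32c -t disable
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256

# Digests corrupted on the target show up on the initiator as the
# "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR completions that
# fill the remainder of this log.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
```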
00:16:47.712 [2024-11-28 21:24:11.310121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.712 [2024-11-28 21:24:11.310181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.712 [2024-11-28 21:24:11.310194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.712 [2024-11-28 21:24:11.325478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.712 [2024-11-28 21:24:11.325526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.712 [2024-11-28 21:24:11.325538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.712 [2024-11-28 21:24:11.340795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.712 [2024-11-28 21:24:11.340844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.712 [2024-11-28 21:24:11.340855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.712 [2024-11-28 21:24:11.355992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.712 [2024-11-28 21:24:11.356063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.712 [2024-11-28 21:24:11.356075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.712 [2024-11-28 21:24:11.371060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.712 [2024-11-28 21:24:11.371105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.712 [2024-11-28 21:24:11.371117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.712 [2024-11-28 21:24:11.386136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.712 [2024-11-28 21:24:11.386182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.712 [2024-11-28 21:24:11.386194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.712 [2024-11-28 21:24:11.401266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.712 [2024-11-28 21:24:11.401312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.712 [2024-11-28 21:24:11.401324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.712 [2024-11-28 21:24:11.416402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.712 [2024-11-28 21:24:11.416462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.712 [2024-11-28 21:24:11.416474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.712 [2024-11-28 21:24:11.431740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.712 [2024-11-28 21:24:11.431806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.712 [2024-11-28 21:24:11.431819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.712 [2024-11-28 21:24:11.447168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.712 [2024-11-28 21:24:11.447220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.712 [2024-11-28 21:24:11.447234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.465881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.465931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.465944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.482590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.482638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.482651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.498403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.498451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.498462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.514297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.514345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.514357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.530751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.530800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.530826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.547327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.547377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.547391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.563251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.563299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.563312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.578840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.578886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.578898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.594220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.594266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.594278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.609093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.609157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.609184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.624078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.624124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.624135] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.638987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.639056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.639068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.654064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.654111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.654123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.668921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.668968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.668979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.683872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.683918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.683929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.972 [2024-11-28 21:24:11.701271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:47.972 [2024-11-28 21:24:11.701317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.972 [2024-11-28 21:24:11.701329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.232 [2024-11-28 21:24:11.719507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.232 [2024-11-28 21:24:11.719555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.232 [2024-11-28 21:24:11.719568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.232 [2024-11-28 21:24:11.737481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.232 [2024-11-28 21:24:11.737530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:48.232 [2024-11-28 21:24:11.737542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.232 [2024-11-28 21:24:11.754803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.232 [2024-11-28 21:24:11.754852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.233 [2024-11-28 21:24:11.754865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.233 [2024-11-28 21:24:11.771664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.233 [2024-11-28 21:24:11.771713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.233 [2024-11-28 21:24:11.771725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.233 [2024-11-28 21:24:11.787608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.233 [2024-11-28 21:24:11.787655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.233 [2024-11-28 21:24:11.787667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.233 [2024-11-28 21:24:11.804084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.233 [2024-11-28 21:24:11.804131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.233 [2024-11-28 21:24:11.804143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.233 [2024-11-28 21:24:11.819899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.233 [2024-11-28 21:24:11.819947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.233 [2024-11-28 21:24:11.819958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.233 [2024-11-28 21:24:11.834966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.233 [2024-11-28 21:24:11.835012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.233 [2024-11-28 21:24:11.835033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.233 [2024-11-28 21:24:11.850270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.233 [2024-11-28 21:24:11.850318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:19379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.233 [2024-11-28 21:24:11.850330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.233 [2024-11-28 21:24:11.865231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.233 [2024-11-28 21:24:11.865277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.233 [2024-11-28 21:24:11.865289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.233 [2024-11-28 21:24:11.880656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.233 [2024-11-28 21:24:11.880703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.233 [2024-11-28 21:24:11.880730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.233 [2024-11-28 21:24:11.895897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.233 [2024-11-28 21:24:11.895958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.233 [2024-11-28 21:24:11.895971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.233 [2024-11-28 21:24:11.910896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.233 [2024-11-28 21:24:11.910949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.233 [2024-11-28 21:24:11.910961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.233 [2024-11-28 21:24:11.927502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.233 [2024-11-28 21:24:11.927549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.233 [2024-11-28 21:24:11.927561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.233 [2024-11-28 21:24:11.943173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.233 [2024-11-28 21:24:11.943221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.233 [2024-11-28 21:24:11.943233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.233 [2024-11-28 21:24:11.958217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.233 [2024-11-28 21:24:11.958264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.233 [2024-11-28 21:24:11.958276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.233 [2024-11-28 21:24:11.973363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.233 [2024-11-28 21:24:11.973409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.233 [2024-11-28 21:24:11.973436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.492 [2024-11-28 21:24:11.989241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.492 [2024-11-28 21:24:11.989286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.492 [2024-11-28 21:24:11.989298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.492 [2024-11-28 21:24:12.004244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.492 [2024-11-28 21:24:12.004290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.492 [2024-11-28 21:24:12.004301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.492 [2024-11-28 21:24:12.019285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.492 [2024-11-28 21:24:12.019333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.492 [2024-11-28 21:24:12.019344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.492 [2024-11-28 21:24:12.034138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.492 [2024-11-28 21:24:12.034184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.492 [2024-11-28 21:24:12.034196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.492 [2024-11-28 21:24:12.049809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.493 [2024-11-28 21:24:12.049856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.493 [2024-11-28 21:24:12.049868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.493 [2024-11-28 21:24:12.066991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 
00:16:48.493 [2024-11-28 21:24:12.067037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.493 [2024-11-28 21:24:12.067051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.493 [2024-11-28 21:24:12.083977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.493 [2024-11-28 21:24:12.084054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.493 [2024-11-28 21:24:12.084067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.493 [2024-11-28 21:24:12.099068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.493 [2024-11-28 21:24:12.099112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.493 [2024-11-28 21:24:12.099124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.493 [2024-11-28 21:24:12.114408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.493 [2024-11-28 21:24:12.114454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.493 [2024-11-28 21:24:12.114477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.493 [2024-11-28 21:24:12.131377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.493 [2024-11-28 21:24:12.131413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.493 [2024-11-28 21:24:12.131426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.493 [2024-11-28 21:24:12.147403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.493 [2024-11-28 21:24:12.147452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.493 [2024-11-28 21:24:12.147479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.493 [2024-11-28 21:24:12.162805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.493 [2024-11-28 21:24:12.162850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.493 [2024-11-28 21:24:12.162862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.493 [2024-11-28 21:24:12.177979] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.493 [2024-11-28 21:24:12.178049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.493 [2024-11-28 21:24:12.178062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.493 [2024-11-28 21:24:12.193326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.493 [2024-11-28 21:24:12.193372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.493 [2024-11-28 21:24:12.193384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.493 [2024-11-28 21:24:12.208441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.493 [2024-11-28 21:24:12.208487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.493 [2024-11-28 21:24:12.208498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.493 [2024-11-28 21:24:12.223575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.493 [2024-11-28 21:24:12.223621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.493 [2024-11-28 21:24:12.223632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.239935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.239981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.239993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.255115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.255184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.255214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.270632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.270679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.270691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.285853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.285900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.285912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.308264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.308296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.308308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.323839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.323886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.323898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.339351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.339404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.339416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.354638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.354694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.354707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.371041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.371132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.371169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.386750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.386797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.386809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.401812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.401857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.401870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.416867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.416912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.416924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.432071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.432127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.432139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.447245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.447292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.447305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.462341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.462387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.462399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.477554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.477601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 21:24:12.477612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.754 [2024-11-28 21:24:12.493090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:48.754 [2024-11-28 21:24:12.493123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.754 [2024-11-28 
21:24:12.493135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.014 [2024-11-28 21:24:12.509222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.509268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.509280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.015 [2024-11-28 21:24:12.524543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.524573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.524603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.015 [2024-11-28 21:24:12.539685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.539731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.539742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.015 [2024-11-28 21:24:12.554724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.554786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.554798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.015 [2024-11-28 21:24:12.571017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.571073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.571085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.015 [2024-11-28 21:24:12.588242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.588288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.588301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.015 [2024-11-28 21:24:12.604957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.605004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8243 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.605041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.015 [2024-11-28 21:24:12.622752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.622820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.622835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.015 [2024-11-28 21:24:12.639613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.639674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.639685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.015 [2024-11-28 21:24:12.656285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.656349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.656362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.015 [2024-11-28 21:24:12.672575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.672641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.672654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.015 [2024-11-28 21:24:12.688553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.688618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.688630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.015 [2024-11-28 21:24:12.704248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.704321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.704334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.015 [2024-11-28 21:24:12.720280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.720351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:2162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.720364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.015 [2024-11-28 21:24:12.736210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.736269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.736282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.015 [2024-11-28 21:24:12.753681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.015 [2024-11-28 21:24:12.753739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.015 [2024-11-28 21:24:12.753768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.275 [2024-11-28 21:24:12.770610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.275 [2024-11-28 21:24:12.770680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.275 [2024-11-28 21:24:12.770692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.275 [2024-11-28 21:24:12.785952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.275 [2024-11-28 21:24:12.786010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.275 [2024-11-28 21:24:12.786033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.275 [2024-11-28 21:24:12.801205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.275 [2024-11-28 21:24:12.801254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.275 [2024-11-28 21:24:12.801266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.275 [2024-11-28 21:24:12.816550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.275 [2024-11-28 21:24:12.816595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.275 [2024-11-28 21:24:12.816608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.275 [2024-11-28 21:24:12.834791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.275 [2024-11-28 21:24:12.834843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.275 [2024-11-28 21:24:12.834856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.275 [2024-11-28 21:24:12.853255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.275 [2024-11-28 21:24:12.853301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.275 [2024-11-28 21:24:12.853313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.275 [2024-11-28 21:24:12.872045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.275 [2024-11-28 21:24:12.872110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.275 [2024-11-28 21:24:12.872123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.275 [2024-11-28 21:24:12.890463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.275 [2024-11-28 21:24:12.890513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.275 [2024-11-28 21:24:12.890527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.275 [2024-11-28 21:24:12.907852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.275 [2024-11-28 21:24:12.907914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.275 [2024-11-28 21:24:12.907927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.275 [2024-11-28 21:24:12.924305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.275 [2024-11-28 21:24:12.924343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.275 [2024-11-28 21:24:12.924356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.275 [2024-11-28 21:24:12.940591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.275 [2024-11-28 21:24:12.940642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.275 [2024-11-28 21:24:12.940655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.275 [2024-11-28 21:24:12.957366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 
00:16:49.275 [2024-11-28 21:24:12.957414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.275 [2024-11-28 21:24:12.957440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.275 [2024-11-28 21:24:12.973471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.275 [2024-11-28 21:24:12.973517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.275 [2024-11-28 21:24:12.973528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.275 [2024-11-28 21:24:12.988353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.275 [2024-11-28 21:24:12.988399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.275 [2024-11-28 21:24:12.988410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.275 [2024-11-28 21:24:13.003000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.275 [2024-11-28 21:24:13.003053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.275 [2024-11-28 21:24:13.003064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.535 [2024-11-28 21:24:13.018705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.535 [2024-11-28 21:24:13.018751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.535 [2024-11-28 21:24:13.018762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.535 [2024-11-28 21:24:13.033677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.535 [2024-11-28 21:24:13.033723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.535 [2024-11-28 21:24:13.033734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.535 [2024-11-28 21:24:13.048497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.535 [2024-11-28 21:24:13.048541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.535 [2024-11-28 21:24:13.048553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.536 [2024-11-28 21:24:13.063453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.536 [2024-11-28 21:24:13.063529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.536 [2024-11-28 21:24:13.063554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.536 [2024-11-28 21:24:13.079647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.536 [2024-11-28 21:24:13.079720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.536 [2024-11-28 21:24:13.079733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.536 [2024-11-28 21:24:13.096517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.536 [2024-11-28 21:24:13.096588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.536 [2024-11-28 21:24:13.096600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.536 [2024-11-28 21:24:13.112936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.536 [2024-11-28 21:24:13.113012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.536 [2024-11-28 21:24:13.113051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.536 [2024-11-28 21:24:13.129327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.536 [2024-11-28 21:24:13.129395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.536 [2024-11-28 21:24:13.129408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.536 [2024-11-28 21:24:13.144484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.536 [2024-11-28 21:24:13.144557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.536 [2024-11-28 21:24:13.144570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.536 [2024-11-28 21:24:13.159313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.536 [2024-11-28 21:24:13.159360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.536 [2024-11-28 21:24:13.159374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.536 [2024-11-28 21:24:13.174016] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.536 [2024-11-28 21:24:13.174072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.536 [2024-11-28 21:24:13.174084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.536 [2024-11-28 21:24:13.188751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.536 [2024-11-28 21:24:13.188797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.536 [2024-11-28 21:24:13.188808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.536 [2024-11-28 21:24:13.203685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.536 [2024-11-28 21:24:13.203729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.536 [2024-11-28 21:24:13.203740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.536 [2024-11-28 21:24:13.218499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.536 [2024-11-28 21:24:13.218544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.536 [2024-11-28 21:24:13.218556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.536 [2024-11-28 21:24:13.233201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.536 [2024-11-28 21:24:13.233246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.536 [2024-11-28 21:24:13.233257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.536 [2024-11-28 21:24:13.247914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.536 [2024-11-28 21:24:13.247958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.536 [2024-11-28 21:24:13.247969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.536 [2024-11-28 21:24:13.262519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.536 [2024-11-28 21:24:13.262562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.536 [2024-11-28 21:24:13.262574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:49.536 [2024-11-28 21:24:13.277710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.536 [2024-11-28 21:24:13.277741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.536 [2024-11-28 21:24:13.277752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.795 [2024-11-28 21:24:13.292734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1814410) 00:16:49.795 [2024-11-28 21:24:13.292778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.795 [2024-11-28 21:24:13.292790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.795 00:16:49.796 Latency(us) 00:16:49.796 [2024-11-28T21:24:13.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.796 [2024-11-28T21:24:13.539Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:49.796 nvme0n1 : 2.01 16006.78 62.53 0.00 0.00 7991.00 6881.28 30146.56 00:16:49.796 [2024-11-28T21:24:13.539Z] =================================================================================================================== 00:16:49.796 [2024-11-28T21:24:13.539Z] Total : 16006.78 62.53 0.00 0.00 7991.00 6881.28 30146.56 00:16:49.796 0 00:16:49.796 21:24:13 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:49.796 21:24:13 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:49.796 21:24:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:49.796 21:24:13 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:49.796 | .driver_specific 00:16:49.796 | .nvme_error 00:16:49.796 | .status_code 00:16:49.796 | .command_transient_transport_error' 00:16:50.055 21:24:13 -- host/digest.sh@71 -- # (( 126 > 0 )) 00:16:50.055 21:24:13 -- host/digest.sh@73 -- # killprocess 83478 00:16:50.055 21:24:13 -- common/autotest_common.sh@936 -- # '[' -z 83478 ']' 00:16:50.055 21:24:13 -- common/autotest_common.sh@940 -- # kill -0 83478 00:16:50.055 21:24:13 -- common/autotest_common.sh@941 -- # uname 00:16:50.055 21:24:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:50.055 21:24:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83478 00:16:50.055 killing process with pid 83478 00:16:50.055 Received shutdown signal, test time was about 2.000000 seconds 00:16:50.055 00:16:50.055 Latency(us) 00:16:50.055 [2024-11-28T21:24:13.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.055 [2024-11-28T21:24:13.798Z] =================================================================================================================== 00:16:50.055 [2024-11-28T21:24:13.798Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:50.055 21:24:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:50.055 21:24:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:50.055 21:24:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83478' 00:16:50.055 21:24:13 -- common/autotest_common.sh@955 -- # kill 83478 00:16:50.055 21:24:13 -- 
common/autotest_common.sh@960 -- # wait 83478 00:16:50.055 21:24:13 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:16:50.055 21:24:13 -- host/digest.sh@54 -- # local rw bs qd 00:16:50.055 21:24:13 -- host/digest.sh@56 -- # rw=randread 00:16:50.055 21:24:13 -- host/digest.sh@56 -- # bs=131072 00:16:50.055 21:24:13 -- host/digest.sh@56 -- # qd=16 00:16:50.055 21:24:13 -- host/digest.sh@58 -- # bperfpid=83536 00:16:50.055 21:24:13 -- host/digest.sh@60 -- # waitforlisten 83536 /var/tmp/bperf.sock 00:16:50.055 21:24:13 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:50.055 21:24:13 -- common/autotest_common.sh@829 -- # '[' -z 83536 ']' 00:16:50.055 21:24:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:50.055 21:24:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:50.055 21:24:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:50.055 21:24:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.055 21:24:13 -- common/autotest_common.sh@10 -- # set +x 00:16:50.055 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:50.056 Zero copy mechanism will not be used. 00:16:50.056 [2024-11-28 21:24:13.795213] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:50.056 [2024-11-28 21:24:13.795297] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83536 ] 00:16:50.315 [2024-11-28 21:24:13.930345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.315 [2024-11-28 21:24:13.962872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.250 21:24:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.250 21:24:14 -- common/autotest_common.sh@862 -- # return 0 00:16:51.250 21:24:14 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:51.250 21:24:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:51.509 21:24:14 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:51.509 21:24:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.509 21:24:14 -- common/autotest_common.sh@10 -- # set +x 00:16:51.509 21:24:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.509 21:24:15 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:51.509 21:24:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:51.769 nvme0n1 00:16:51.769 21:24:15 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:51.769 21:24:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.769 21:24:15 -- common/autotest_common.sh@10 -- # set +x 00:16:51.769 21:24:15 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:51.769 21:24:15 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:51.769 21:24:15 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:51.769 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:51.769 Zero copy mechanism will not be used. 00:16:51.769 Running I/O for 2 seconds... 00:16:51.769 [2024-11-28 21:24:15.410901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.410965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.410980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.415271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.415311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.415325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.419599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.419647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.419659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.423777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.423825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.423836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.427922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.427969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.427982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.431969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.432042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.432056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.436178] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.436225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.436237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.440204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.440252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.440264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.444323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.444371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.444383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.448490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.448537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.448549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.452667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.452715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.452726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.456862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.456908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.456920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.461000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.461073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.461085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:16:51.769 [2024-11-28 21:24:15.465271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.465303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.465315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.469936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.469998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.470010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.474173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.474219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.474230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.478284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.478331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.478343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.482358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.482404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.482416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.486466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.486513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.486525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.490701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.490767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.490780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.494741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.494788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.494800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.498985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.499042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.499054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.502985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.503042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.503054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.769 [2024-11-28 21:24:15.507306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:51.769 [2024-11-28 21:24:15.507341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.769 [2024-11-28 21:24:15.507355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.030 [2024-11-28 21:24:15.511744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.030 [2024-11-28 21:24:15.511794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.030 [2024-11-28 21:24:15.511806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.030 [2024-11-28 21:24:15.516192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.030 [2024-11-28 21:24:15.516238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.030 [2024-11-28 21:24:15.516250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.030 [2024-11-28 21:24:15.520315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.030 [2024-11-28 21:24:15.520361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.030 [2024-11-28 21:24:15.520373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.030 [2024-11-28 21:24:15.524350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.030 [2024-11-28 21:24:15.524397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.030 [2024-11-28 21:24:15.524408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.030 [2024-11-28 21:24:15.528344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.030 [2024-11-28 21:24:15.528392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.030 [2024-11-28 21:24:15.528404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.030 [2024-11-28 21:24:15.532417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.030 [2024-11-28 21:24:15.532463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.030 [2024-11-28 21:24:15.532475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.030 [2024-11-28 21:24:15.536477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.536524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.536536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.540613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.540659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.540671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.544723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.544769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.544781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.548922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.548969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:52.031 [2024-11-28 21:24:15.548981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.552968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.553041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.553054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.557069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.557116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.557128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.561491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.561539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.561551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.565896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.565953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.565966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.570143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.570189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.570200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.574285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.574332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.574344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.578405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.578451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.578462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.582511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.582559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.582571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.586690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.586737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.586749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.590995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.591066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.591079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.595326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.595361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.595375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.599377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.599426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.599439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.603437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.603486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.603517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.607598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.607644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.607656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.611751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.611798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.611810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.615818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.615865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.615877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.619871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.619918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.619930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.623965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.624012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.031 [2024-11-28 21:24:15.624034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.031 [2024-11-28 21:24:15.628014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.031 [2024-11-28 21:24:15.628070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.628082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.632099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.632144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.632156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.636179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 
00:16:52.032 [2024-11-28 21:24:15.636226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.636237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.640275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.640322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.640333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.644301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.644362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.644374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.648392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.648439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.648450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.652501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.652547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.652559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.656614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.656661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.656673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.660696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.660759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.660772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.664807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.664854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.664866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.668964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.669012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.669035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.672999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.673056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.673068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.677151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.677196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.677208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.681232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.681278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.681290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.685338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.685385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.685396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.689401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.689448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.689459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.693584] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.693645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.693657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.697674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.697720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.697731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.701794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.701841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.701853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.705785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.705831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.705843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.709776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.032 [2024-11-28 21:24:15.709822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.032 [2024-11-28 21:24:15.709834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.032 [2024-11-28 21:24:15.713827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.033 [2024-11-28 21:24:15.713873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.033 [2024-11-28 21:24:15.713885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.033 [2024-11-28 21:24:15.717833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.033 [2024-11-28 21:24:15.717879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.033 [2024-11-28 21:24:15.717891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:16:52.033 [2024-11-28 21:24:15.721897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.033 [2024-11-28 21:24:15.721943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.033 [2024-11-28 21:24:15.721954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.033 [2024-11-28 21:24:15.726460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.033 [2024-11-28 21:24:15.726506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.033 [2024-11-28 21:24:15.726518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.033 [2024-11-28 21:24:15.731057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.033 [2024-11-28 21:24:15.731113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.033 [2024-11-28 21:24:15.731125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.033 [2024-11-28 21:24:15.734982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.033 [2024-11-28 21:24:15.735039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.033 [2024-11-28 21:24:15.735051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.033 [2024-11-28 21:24:15.739031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.033 [2024-11-28 21:24:15.739076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.033 [2024-11-28 21:24:15.739088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.033 [2024-11-28 21:24:15.743055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.033 [2024-11-28 21:24:15.743102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.033 [2024-11-28 21:24:15.743113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.033 [2024-11-28 21:24:15.747347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.033 [2024-11-28 21:24:15.747381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.033 [2024-11-28 21:24:15.747394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.033 [2024-11-28 21:24:15.751424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.033 [2024-11-28 21:24:15.751471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.033 [2024-11-28 21:24:15.751498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.033 [2024-11-28 21:24:15.755702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.033 [2024-11-28 21:24:15.755749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.033 [2024-11-28 21:24:15.755761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.033 [2024-11-28 21:24:15.759741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.033 [2024-11-28 21:24:15.759788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.033 [2024-11-28 21:24:15.759800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.033 [2024-11-28 21:24:15.763802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.033 [2024-11-28 21:24:15.763848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.033 [2024-11-28 21:24:15.763860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.033 [2024-11-28 21:24:15.768326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.033 [2024-11-28 21:24:15.768377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.033 [2024-11-28 21:24:15.768389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.773030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.773120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.773148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.777493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.777542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.777555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.782123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.782170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.782183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.786270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.786317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.786329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.790449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.790496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.790509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.794859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.794906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.794918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.799482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.799531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.799544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.804136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.804185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.804199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.808757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.808805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:52.293 [2024-11-28 21:24:15.808817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.813471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.813518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.813529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.817907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.817955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.817967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.822528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.822575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.822586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.826900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.826947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.826959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.831386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.831436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.831464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.835783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.835830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.835843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.839843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.839889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.839901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.844000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.844056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.844069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.848207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.848253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.848263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.852273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.852319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.852331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.856421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.856468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.856479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.860521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.293 [2024-11-28 21:24:15.860569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.293 [2024-11-28 21:24:15.860581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.293 [2024-11-28 21:24:15.864628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.864675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.864686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.868664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.868711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.868739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.872943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.872991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.873003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.877050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.877097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.877108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.881109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.881170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.881182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.885222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.885268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.885280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.889230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.889276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.889288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.893346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.893394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.893405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.897501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 
00:16:52.294 [2024-11-28 21:24:15.897548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.897560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.901644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.901691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.901703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.905813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.905860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.905872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.909882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.909929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.909940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.913981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.914053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.914066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.918144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.918190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.918202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.922134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.922180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.922192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.926201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.926248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.926260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.930338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.930386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.930398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.934365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.934412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.934438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.938376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.938423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.938449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.942530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.942577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.942589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.946625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.946672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.946684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.950650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.950696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.950708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.954768] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.954815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.954827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.958977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.959035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.959047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.963062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.963107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.963118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.967281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.967315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.967327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.294 [2024-11-28 21:24:15.971274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.294 [2024-11-28 21:24:15.971322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.294 [2024-11-28 21:24:15.971334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.295 [2024-11-28 21:24:15.975373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.295 [2024-11-28 21:24:15.975423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.295 [2024-11-28 21:24:15.975450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.295 [2024-11-28 21:24:15.979911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.295 [2024-11-28 21:24:15.979964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.295 [2024-11-28 21:24:15.979978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:16:52.295 [2024-11-28 21:24:15.984999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.295 [2024-11-28 21:24:15.985058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.295 [2024-11-28 21:24:15.985072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.295 [2024-11-28 21:24:15.990167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.295 [2024-11-28 21:24:15.990216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.295 [2024-11-28 21:24:15.990228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.295 [2024-11-28 21:24:15.994481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.295 [2024-11-28 21:24:15.994528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.295 [2024-11-28 21:24:15.994540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.295 [2024-11-28 21:24:15.998810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.295 [2024-11-28 21:24:15.998859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.295 [2024-11-28 21:24:15.998871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.295 [2024-11-28 21:24:16.002906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.295 [2024-11-28 21:24:16.002954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.295 [2024-11-28 21:24:16.002966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.295 [2024-11-28 21:24:16.006970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.295 [2024-11-28 21:24:16.007029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.295 [2024-11-28 21:24:16.007043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.295 [2024-11-28 21:24:16.011107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.295 [2024-11-28 21:24:16.011177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.295 [2024-11-28 21:24:16.011206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.295 [2024-11-28 21:24:16.015227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.295 [2024-11-28 21:24:16.015260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.295 [2024-11-28 21:24:16.015272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.295 [2024-11-28 21:24:16.019289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.295 [2024-11-28 21:24:16.019338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.295 [2024-11-28 21:24:16.019351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.295 [2024-11-28 21:24:16.023705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.295 [2024-11-28 21:24:16.023753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.295 [2024-11-28 21:24:16.023765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.295 [2024-11-28 21:24:16.028097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.295 [2024-11-28 21:24:16.028144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.295 [2024-11-28 21:24:16.028157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.295 [2024-11-28 21:24:16.032805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.295 [2024-11-28 21:24:16.032855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.295 [2024-11-28 21:24:16.032868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.555 [2024-11-28 21:24:16.037660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.555 [2024-11-28 21:24:16.037706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.555 [2024-11-28 21:24:16.037719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.555 [2024-11-28 21:24:16.042395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.555 [2024-11-28 21:24:16.042426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.555 [2024-11-28 21:24:16.042438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.555 [2024-11-28 21:24:16.046776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.555 [2024-11-28 21:24:16.046824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.555 [2024-11-28 21:24:16.046837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.555 [2024-11-28 21:24:16.051192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.555 [2024-11-28 21:24:16.051226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.555 [2024-11-28 21:24:16.051239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.555 [2024-11-28 21:24:16.055584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.555 [2024-11-28 21:24:16.055630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.555 [2024-11-28 21:24:16.055642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.555 [2024-11-28 21:24:16.060065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.555 [2024-11-28 21:24:16.060125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.555 [2024-11-28 21:24:16.060137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.555 [2024-11-28 21:24:16.064206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.555 [2024-11-28 21:24:16.064252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.555 [2024-11-28 21:24:16.064265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.555 [2024-11-28 21:24:16.068329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.068376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.068389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.072911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.072960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:52.556 [2024-11-28 21:24:16.072972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.077171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.077217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.077229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.081394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.081440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.081453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.085859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.085907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.085919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.090135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.090183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.090196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.094300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.094348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.094360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.098595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.098643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.098654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.102821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.102869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.102881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.107024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.107082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.107111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.111112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.111183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.111197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.115123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.115177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.115190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.119312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.119348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.119361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.123427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.123462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.123476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.127584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.127631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.127643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.131785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.131820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.131833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.135904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.135951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.135963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.140174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.140220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.140232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.144388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.144436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.144448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.148632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.148680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.148692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.152867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.152915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.152927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.157266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.157314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.157325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.161467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 
00:16:52.556 [2024-11-28 21:24:16.161515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.161527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.165571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.556 [2024-11-28 21:24:16.165617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.556 [2024-11-28 21:24:16.165628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.556 [2024-11-28 21:24:16.170093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.170140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.170152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.174271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.174319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.174331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.178590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.178637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.178648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.182599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.182629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.182657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.186635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.186681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.186693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.190919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.190968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.190980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.195612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.195659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.195670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.200278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.200324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.200336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.205032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.205126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.205153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.209668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.209715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.209744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.214538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.214584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.214595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.219360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.219408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.219421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.224204] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.224250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.224261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.228850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.228901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.228914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.233792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.233828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.233842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.238331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.238392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.238404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.243168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.243201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.243214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.248000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.248046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.248060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.252537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.252585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.252597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:16:52.557 [2024-11-28 21:24:16.257322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.257368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.257381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.261903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.261939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.261952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.266724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.266787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.266800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.271311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.271346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.271359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.275932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.275967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.275979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.280500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.280548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.280560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.285119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.285166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.285177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.557 [2024-11-28 21:24:16.289551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.557 [2024-11-28 21:24:16.289598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.557 [2024-11-28 21:24:16.289611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.558 [2024-11-28 21:24:16.294274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.558 [2024-11-28 21:24:16.294308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.558 [2024-11-28 21:24:16.294321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.298823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.298870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.298882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.303395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.303430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.303444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.307727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.307775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.307787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.311984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.312041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.312053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.316401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.316448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.316459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.320690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.320740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.320752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.325235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.325285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.325298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.329634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.329683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.329696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.333927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.333974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.333986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.338229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.338276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.338289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.342611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.342659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.342671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.346808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.346856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:52.818 [2024-11-28 21:24:16.346868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.350978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.351040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.351053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.355339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.355373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.355386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.359453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.359528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.359540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.363706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.363753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.363765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.368050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.818 [2024-11-28 21:24:16.368107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.818 [2024-11-28 21:24:16.368120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.818 [2024-11-28 21:24:16.372214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.372261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.372273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.376359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.376420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.376432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.380548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.380594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.380606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.384769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.384816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.384829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.388986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.389044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.389057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.393404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.393450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.393462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.397657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.397703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.397730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.401910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.401958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.401970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.406218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.406265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.406276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.410264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.410311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.410322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.414378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.414425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.414450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.418551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.418597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.418609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.422711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.422756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.422769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.426755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.426801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.426813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.430837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.430884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.430897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.434934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 
00:16:52.819 [2024-11-28 21:24:16.434980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.434992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.438982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.439039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.439052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.442986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.443043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.443056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.447046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.447092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.447104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.451091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.451136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.451171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.455171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.455219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.455231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.459280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.459314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.459326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.463312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.463360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.463373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.467418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.467466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.819 [2024-11-28 21:24:16.467493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.819 [2024-11-28 21:24:16.471528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.819 [2024-11-28 21:24:16.471574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.471585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.475667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.475714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.475737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.479690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.479752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.479763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.483678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.483741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.483753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.487782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.487829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.487841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.491809] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.491872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.491883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.495870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.495916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.495929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.500088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.500150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.500162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.504156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.504202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.504213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.508213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.508259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.508270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.512307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.512354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.512366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.516717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.516782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.516796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:16:52.820 [2024-11-28 21:24:16.521200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.521245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.521257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.525557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.525603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.525615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.529837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.529885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.529897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.534152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.534198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.534210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.538290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.538336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.538349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.542304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.542350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.542362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.546460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.546506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.546518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.550829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.550862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.550874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.820 [2024-11-28 21:24:16.555362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:52.820 [2024-11-28 21:24:16.555397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.820 [2024-11-28 21:24:16.555410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.081 [2024-11-28 21:24:16.559989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.081 [2024-11-28 21:24:16.560047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.081 [2024-11-28 21:24:16.560060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.081 [2024-11-28 21:24:16.564188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.081 [2024-11-28 21:24:16.564234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.081 [2024-11-28 21:24:16.564246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.081 [2024-11-28 21:24:16.568507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.081 [2024-11-28 21:24:16.568553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.081 [2024-11-28 21:24:16.568565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.081 [2024-11-28 21:24:16.572625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.081 [2024-11-28 21:24:16.572671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.081 [2024-11-28 21:24:16.572683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.081 [2024-11-28 21:24:16.576706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.081 [2024-11-28 21:24:16.576752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.081 [2024-11-28 21:24:16.576764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.081 [2024-11-28 21:24:16.580862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.081 [2024-11-28 21:24:16.580908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.081 [2024-11-28 21:24:16.580919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.081 [2024-11-28 21:24:16.585019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.081 [2024-11-28 21:24:16.585065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.081 [2024-11-28 21:24:16.585077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.081 [2024-11-28 21:24:16.589069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.081 [2024-11-28 21:24:16.589114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.081 [2024-11-28 21:24:16.589125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.081 [2024-11-28 21:24:16.593163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.081 [2024-11-28 21:24:16.593209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.081 [2024-11-28 21:24:16.593220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.081 [2024-11-28 21:24:16.597289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.597335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.597347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.601394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.601440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.601451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.605512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.605558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.082 [2024-11-28 21:24:16.605570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.609526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.609572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.609583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.613722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.613769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.613781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.617859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.617906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.617918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.622004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.622076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.622105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.626006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.626079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.626091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.630055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.630102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.630129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.634134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.634182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.634194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.638200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.638247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.638259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.642210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.642257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.642269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.646452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.646498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.646510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.650463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.650510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.650522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.654611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.654657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.654669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.658618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.658649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.658677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.662766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.662813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.662825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.666880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.666928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.666939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.671291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.671325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.671338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.675632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.675679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.675691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.680044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.680103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.680115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.684401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.684462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.684474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.688424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.688470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.688481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.692611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 
00:16:53.082 [2024-11-28 21:24:16.692658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.082 [2024-11-28 21:24:16.692670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.082 [2024-11-28 21:24:16.696700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.082 [2024-11-28 21:24:16.696746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.696758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.700758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.700804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.700816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.704838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.704884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.704896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.709067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.709113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.709124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.713064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.713109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.713121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.717062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.717107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.717118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.720990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.721046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.721059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.725019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.725064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.725075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.729069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.729115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.729127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.733215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.733260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.733272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.737415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.737462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.737474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.741552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.741598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.741610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.745768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.745815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.745827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.749776] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.749835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.749848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.753903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.753953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.753965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.758035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.758082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.758094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.762299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.762345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.762357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.766367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.766414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.766425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.770777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.770827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.770839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.775123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.775192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.775206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:16:53.083 [2024-11-28 21:24:16.779363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.779413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.779427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.783684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.783742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.783754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.788086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.788145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.788158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.792340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.792387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.792399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.796617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.796663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.796675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.800948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.800995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.801007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.083 [2024-11-28 21:24:16.805090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.083 [2024-11-28 21:24:16.805137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.083 [2024-11-28 21:24:16.805149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.084 [2024-11-28 21:24:16.809388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.084 [2024-11-28 21:24:16.809436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.084 [2024-11-28 21:24:16.809464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.084 [2024-11-28 21:24:16.814169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.084 [2024-11-28 21:24:16.814218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.084 [2024-11-28 21:24:16.814245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.084 [2024-11-28 21:24:16.818915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.084 [2024-11-28 21:24:16.818964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.084 [2024-11-28 21:24:16.818976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.343 [2024-11-28 21:24:16.823710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.823759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.823772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.828277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.828342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.828355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.833076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.833136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.833150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.837722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.837769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.837780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.842236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.842285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.842298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.846846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.846895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.846909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.851290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.851325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.851339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.855724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.855770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.855782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.859899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.859944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.859956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.864053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.864112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.864124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.868069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.868127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.344 [2024-11-28 21:24:16.868139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.872249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.872296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.872308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.876427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.876475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.876487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.880644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.880691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.880703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.884884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.884931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.884944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.889135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.889181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.889193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.893300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.893346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.893360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.897468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.897516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.897528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.901668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.901715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.901726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.905791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.905837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.905849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.909939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.909986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.909997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.913957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.914003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.914025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.917953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.918000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.918011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.921972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.922026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.922040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.925987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.926042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.926054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.930051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.930096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.930108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.934087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.934132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.934144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.938158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.938204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.938215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.942171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.942216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.942228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.946246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.946293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.946305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.950248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.950294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.950306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.954159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 
00:16:53.344 [2024-11-28 21:24:16.954204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.954216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.344 [2024-11-28 21:24:16.958341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.344 [2024-11-28 21:24:16.958388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.344 [2024-11-28 21:24:16.958400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:16.962373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:16.962428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:16.962440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:16.966388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:16.966434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:16.966446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:16.970572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:16.970620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:16.970631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:16.974590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:16.974636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:16.974649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:16.978844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:16.978890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:16.978902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:16.983113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:16.983185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:16.983223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:16.987594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:16.987642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:16.987654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:16.992081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:16.992137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:16.992149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:16.996307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:16.996354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:16.996365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.000640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.000687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.000716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.004848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.004895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.004907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.009037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.009083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.009095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.013179] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.013225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.013237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.017268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.017313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.017325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.021373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.021420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.021432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.025402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.025447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.025458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.029520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.029566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.029579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.033720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.033766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.033778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.037871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.037919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.037931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:16:53.345 [2024-11-28 21:24:17.041983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.042038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.042050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.046002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.046057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.046069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.050071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.050120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.050132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.054189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.054236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.054247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.058338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.058386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.058397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.062344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.062391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.062402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.066454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.066501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.066513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.070822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.070868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.070880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.075538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.075585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.075598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.079705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.079753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.079766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.345 [2024-11-28 21:24:17.084271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.345 [2024-11-28 21:24:17.084336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.345 [2024-11-28 21:24:17.084348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.606 [2024-11-28 21:24:17.088638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.606 [2024-11-28 21:24:17.088685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.606 [2024-11-28 21:24:17.088713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.606 [2024-11-28 21:24:17.093329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.606 [2024-11-28 21:24:17.093376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.606 [2024-11-28 21:24:17.093388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.606 [2024-11-28 21:24:17.097566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.606 [2024-11-28 21:24:17.097613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.606 [2024-11-28 21:24:17.097625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.606 [2024-11-28 21:24:17.101791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.606 [2024-11-28 21:24:17.101839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.606 [2024-11-28 21:24:17.101852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.606 [2024-11-28 21:24:17.105910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.606 [2024-11-28 21:24:17.105956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.606 [2024-11-28 21:24:17.105968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.606 [2024-11-28 21:24:17.110532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.606 [2024-11-28 21:24:17.110595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.606 [2024-11-28 21:24:17.110608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.606 [2024-11-28 21:24:17.114974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.606 [2024-11-28 21:24:17.115035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.606 [2024-11-28 21:24:17.115049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.606 [2024-11-28 21:24:17.119227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.606 [2024-11-28 21:24:17.119261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.606 [2024-11-28 21:24:17.119274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.606 [2024-11-28 21:24:17.123441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.606 [2024-11-28 21:24:17.123476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.606 [2024-11-28 21:24:17.123509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.606 [2024-11-28 21:24:17.127834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.127881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.607 [2024-11-28 21:24:17.127893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.132255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.132303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.132315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.136598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.136647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.136660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.140867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.140916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.140929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.145158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.145205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.145217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.149340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.149386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.149397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.153415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.153462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.153474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.157534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.157581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.157593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.161596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.161642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.161654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.165645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.165691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.165702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.169776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.169823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.169834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.174264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.174312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.174324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.178723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.178805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.178818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.183445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.183495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.183508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.187986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.188066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.188095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.192638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.192686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.192698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.197126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.197174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.197189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.201464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.201513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.201524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.205701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.205749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.205761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.210005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.210078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.210091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.214159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.214205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.214217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.218198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 
00:16:53.607 [2024-11-28 21:24:17.218245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.218257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.222543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.222590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.222602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.226659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.226706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.226718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.230763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.230810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.230822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.235221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.235253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.235265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.239324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.239373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.607 [2024-11-28 21:24:17.239387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.607 [2024-11-28 21:24:17.243359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.607 [2024-11-28 21:24:17.243392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.243405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.247825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.247871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.247883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.252030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.252089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.252101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.256134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.256196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.256210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.260450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.260498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.260510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.264585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.264632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.264645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.268748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.268795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.268823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.273163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.273211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.273222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.277381] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.277427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.277439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.281513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.281560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.281572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.285856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.285904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.285916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.290113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.290159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.290171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.294343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.294389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.294400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.298635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.298682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.298694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.302688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.302735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.302748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:16:53.608 [2024-11-28 21:24:17.306876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.306923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.306935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.311092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.311138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.311191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.315240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.315273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.315285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.319245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.319293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.319306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.323716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.323763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.323775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.328390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.328436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.328447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.333247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.333294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.333323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.337531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.337578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.337589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.341632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.341678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.341690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.608 [2024-11-28 21:24:17.346316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.608 [2024-11-28 21:24:17.346376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.608 [2024-11-28 21:24:17.346388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.868 [2024-11-28 21:24:17.350689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.868 [2024-11-28 21:24:17.350735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-11-28 21:24:17.350747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.868 [2024-11-28 21:24:17.355186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.868 [2024-11-28 21:24:17.355220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-11-28 21:24:17.355233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.868 [2024-11-28 21:24:17.359305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.868 [2024-11-28 21:24:17.359339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-11-28 21:24:17.359353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.868 [2024-11-28 21:24:17.363584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.868 [2024-11-28 21:24:17.363630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-11-28 21:24:17.363641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.868 [2024-11-28 21:24:17.367731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.868 [2024-11-28 21:24:17.367778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-11-28 21:24:17.367789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.868 [2024-11-28 21:24:17.371926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.868 [2024-11-28 21:24:17.371972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-11-28 21:24:17.371984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.868 [2024-11-28 21:24:17.376034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.868 [2024-11-28 21:24:17.376091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-11-28 21:24:17.376103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.868 [2024-11-28 21:24:17.380125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.868 [2024-11-28 21:24:17.380169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-11-28 21:24:17.380180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.868 [2024-11-28 21:24:17.384232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.868 [2024-11-28 21:24:17.384280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-11-28 21:24:17.384292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.868 [2024-11-28 21:24:17.388327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.868 [2024-11-28 21:24:17.388375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-11-28 21:24:17.388387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.868 [2024-11-28 21:24:17.392480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0) 00:16:53.868 [2024-11-28 21:24:17.392527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.868 [2024-11-28 21:24:17.392539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:16:53.868 [2024-11-28 21:24:17.397166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0)
00:16:53.868 [2024-11-28 21:24:17.397217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:53.868 [2024-11-28 21:24:17.397230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:16:53.868 [2024-11-28 21:24:17.401833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb5d5b0)
00:16:53.868 [2024-11-28 21:24:17.401884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:53.868 [2024-11-28 21:24:17.401898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:16:53.868
00:16:53.868 Latency(us)
00:16:53.868 [2024-11-28T21:24:17.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:53.868 [2024-11-28T21:24:17.611Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:16:53.868 nvme0n1 : 2.00 7274.03 909.25 0.00 0.00 2196.25 1742.66 9949.56
00:16:53.868 [2024-11-28T21:24:17.611Z] ===================================================================================================================
00:16:53.868 [2024-11-28T21:24:17.611Z] Total : 7274.03 909.25 0.00 0.00 2196.25 1742.66 9949.56
00:16:53.868 0
00:16:53.868 21:24:17 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:16:53.868 21:24:17 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:16:53.868 21:24:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:16:53.868 21:24:17 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:16:53.868 | .driver_specific
00:16:53.868 | .nvme_error
00:16:53.868 | .status_code
00:16:53.868 | .command_transient_transport_error'
00:16:54.141 21:24:17 -- host/digest.sh@71 -- # (( 470 > 0 ))
00:16:54.141 21:24:17 -- host/digest.sh@73 -- # killprocess 83536
00:16:54.141 21:24:17 -- common/autotest_common.sh@936 -- # '[' -z 83536 ']'
00:16:54.141 21:24:17 -- common/autotest_common.sh@940 -- # kill -0 83536
00:16:54.141 21:24:17 -- common/autotest_common.sh@941 -- # uname
00:16:54.141 21:24:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:54.141 21:24:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83536
00:16:54.141 killing process with pid 83536
Received shutdown signal, test time was about 2.000000 seconds
00:16:54.141
00:16:54.141 Latency(us)
00:16:54.141 [2024-11-28T21:24:17.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:54.141 [2024-11-28T21:24:17.884Z] ===================================================================================================================
00:16:54.141 [2024-11-28T21:24:17.884Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:54.141 21:24:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:16:54.141 21:24:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
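The get_transient_errcount step above is just an RPC call plus a jq filter: bdev_get_iostat is queried over the bperf socket, and the per-bdev nvme_error counters (enabled by the bdev_nvme_set_options --nvme-error-stat call used in this test) are reduced to the command_transient_transport_error total, which the test requires to be greater than zero. A minimal standalone sketch of the same check, assuming a bdevperf instance is already listening on /var/tmp/bperf.sock and the attached bdev is named nvme0n1; the variable names here are illustrative and not part of the test scripts:

  #!/usr/bin/env bash
  # Count the transient transport errors recorded for nvme0n1 via the bperf RPC socket.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

  # The digest test only passes when at least one such error was observed.
  (( errcount > 0 )) || exit 1

In this run the extracted count was 470, so the (( 470 > 0 )) check above succeeds and the randread leg is torn down.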
00:16:54.141 21:24:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83536'
00:16:54.141 21:24:17 -- common/autotest_common.sh@955 -- # kill 83536
00:16:54.141 21:24:17 -- common/autotest_common.sh@960 -- # wait 83536
00:16:54.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:16:54.425 21:24:17 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:16:54.425 21:24:17 -- host/digest.sh@54 -- # local rw bs qd
00:16:54.425 21:24:17 -- host/digest.sh@56 -- # rw=randwrite
00:16:54.425 21:24:17 -- host/digest.sh@56 -- # bs=4096
00:16:54.425 21:24:17 -- host/digest.sh@56 -- # qd=128
00:16:54.425 21:24:17 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:16:54.425 21:24:17 -- host/digest.sh@58 -- # bperfpid=83602
00:16:54.425 21:24:17 -- host/digest.sh@60 -- # waitforlisten 83602 /var/tmp/bperf.sock
00:16:54.425 21:24:17 -- common/autotest_common.sh@829 -- # '[' -z 83602 ']'
00:16:54.425 21:24:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:16:54.425 21:24:17 -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:54.425 21:24:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:16:54.425 21:24:17 -- common/autotest_common.sh@838 -- # xtrace_disable
00:16:54.425 21:24:17 -- common/autotest_common.sh@10 -- # set +x
00:16:54.425 [2024-11-28 21:24:17.941516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... [2024-11-28 21:24:17.941601] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83602 ]
00:16:54.684 [2024-11-28 21:24:18.070605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:54.684 [2024-11-28 21:24:18.103735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:54.684 21:24:18 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:54.684 21:24:18 -- common/autotest_common.sh@862 -- # return 0
00:16:54.684 21:24:18 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:16:54.684 21:24:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:16:54.942 21:24:18 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:16:54.942 -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.942 21:24:18 -- common/autotest_common.sh@10 -- # set +x
00:16:54.942 21:24:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.942 21:24:18 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:16:54.942 21:24:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:16:55.201 nvme0n1
00:16:55.201 21:24:18 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:16:55.201 -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.201 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:16:55.201
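The randwrite leg that starts here mirrors the randread leg that just finished: a fresh bdevperf is launched on its own RPC socket, NVMe error counters and unlimited bdev retries are switched on, the controller is attached over TCP with data digest (--ddgst) enabled, and the accel layer is told to corrupt every 256th crc32c operation so that digest errors occur on purpose. A condensed sketch of that sequence, using only commands visible in the trace; the target address 10.0.0.2 and the repository path are taken from this environment, and the plain rpc_cmd call is assumed to go to the target application's default RPC socket rather than to bperf's:

  #!/usr/bin/env bash
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bperf.sock

  # 1. Launch bdevperf in wait-for-RPC mode (-z): 2 s of randwrite, 4 KiB I/O, queue depth 128.
  "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
  while [ ! -S "$sock" ]; do sleep 0.1; done   # the test uses waitforlisten for this step

  # 2. Record per-command NVMe error statistics and retry failed I/O indefinitely.
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # 3. Attach the target with TCP data digest enabled; this exposes bdev nvme0n1.
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 4. Corrupt every 256th crc32c accel operation (rpc_cmd in the trace; default RPC socket assumed),
  #    then start the workload.
  "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests

Each corrupted digest then shows up in the output that follows as a data_crc32_calc_done error paired with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion for the affected WRITE, which is what the error counter checked after the run is meant to pick up.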
21:24:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.201 21:24:18 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:55.201 21:24:18 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:55.201 Running I/O for 2 seconds... 00:16:55.201 [2024-11-28 21:24:18.908120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ddc00 00:16:55.201 [2024-11-28 21:24:18.909484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.202 [2024-11-28 21:24:18.909524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.202 [2024-11-28 21:24:18.923086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fef90 00:16:55.202 [2024-11-28 21:24:18.924521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.202 [2024-11-28 21:24:18.924568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.202 [2024-11-28 21:24:18.937844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ff3c8 00:16:55.202 [2024-11-28 21:24:18.939216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.202 [2024-11-28 21:24:18.939262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:55.461 [2024-11-28 21:24:18.953553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190feb58 00:16:55.461 [2024-11-28 21:24:18.954835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.461 [2024-11-28 21:24:18.954881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:55.461 [2024-11-28 21:24:18.968747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fe720 00:16:55.461 [2024-11-28 21:24:18.970047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.461 [2024-11-28 21:24:18.970117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:55.461 [2024-11-28 21:24:18.983250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fe2e8 00:16:55.461 [2024-11-28 21:24:18.984620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.461 [2024-11-28 21:24:18.984664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:55.461 [2024-11-28 21:24:18.997860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fdeb0 00:16:55.461 [2024-11-28 
21:24:18.999197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.461 [2024-11-28 21:24:18.999243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:55.461 [2024-11-28 21:24:19.012419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fda78 00:16:55.461 [2024-11-28 21:24:19.013711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.461 [2024-11-28 21:24:19.013754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:55.461 [2024-11-28 21:24:19.026802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fd640 00:16:55.461 [2024-11-28 21:24:19.028203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.461 [2024-11-28 21:24:19.028249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:55.461 [2024-11-28 21:24:19.041273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fd208 00:16:55.461 [2024-11-28 21:24:19.042511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.461 [2024-11-28 21:24:19.042556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:55.461 [2024-11-28 21:24:19.055719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fcdd0 00:16:55.461 [2024-11-28 21:24:19.056986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.461 [2024-11-28 21:24:19.057056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:55.461 [2024-11-28 21:24:19.071069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fc998 00:16:55.461 [2024-11-28 21:24:19.072436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.461 [2024-11-28 21:24:19.072485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:55.461 [2024-11-28 21:24:19.086644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fc560 00:16:55.462 [2024-11-28 21:24:19.087930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.462 [2024-11-28 21:24:19.087976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:55.462 [2024-11-28 21:24:19.101348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fc128 00:16:55.462 
[2024-11-28 21:24:19.102554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.462 [2024-11-28 21:24:19.102598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:55.462 [2024-11-28 21:24:19.115777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fbcf0 00:16:55.462 [2024-11-28 21:24:19.117073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.462 [2024-11-28 21:24:19.117158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:55.462 [2024-11-28 21:24:19.130350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fb8b8 00:16:55.462 [2024-11-28 21:24:19.131590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.462 [2024-11-28 21:24:19.131634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:55.462 [2024-11-28 21:24:19.144899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fb480 00:16:55.462 [2024-11-28 21:24:19.146124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.462 [2024-11-28 21:24:19.146168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:55.462 [2024-11-28 21:24:19.160078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fb048 00:16:55.462 [2024-11-28 21:24:19.161412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.462 [2024-11-28 21:24:19.161474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:55.462 [2024-11-28 21:24:19.174938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fac10 00:16:55.462 [2024-11-28 21:24:19.176162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.462 [2024-11-28 21:24:19.176205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:55.462 [2024-11-28 21:24:19.189419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190fa7d8 00:16:55.462 [2024-11-28 21:24:19.190572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.462 [2024-11-28 21:24:19.190616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:55.721 [2024-11-28 21:24:19.204678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with 
pdu=0x2000190fa3a0 00:16:55.721 [2024-11-28 21:24:19.205901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.721 [2024-11-28 21:24:19.205945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:55.721 [2024-11-28 21:24:19.219513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f9f68 00:16:55.721 [2024-11-28 21:24:19.220661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.721 [2024-11-28 21:24:19.220721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:55.721 [2024-11-28 21:24:19.234131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f9b30 00:16:55.721 [2024-11-28 21:24:19.235304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.721 [2024-11-28 21:24:19.235352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:55.721 [2024-11-28 21:24:19.248546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f96f8 00:16:55.721 [2024-11-28 21:24:19.249714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.721 [2024-11-28 21:24:19.249757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:55.721 [2024-11-28 21:24:19.262982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f92c0 00:16:55.721 [2024-11-28 21:24:19.264154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.721 [2024-11-28 21:24:19.264198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:55.721 [2024-11-28 21:24:19.279283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f8e88 00:16:55.721 [2024-11-28 21:24:19.280476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.721 [2024-11-28 21:24:19.280520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:55.721 [2024-11-28 21:24:19.296158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f8a50 00:16:55.721 [2024-11-28 21:24:19.297322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.722 [2024-11-28 21:24:19.297367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:55.722 [2024-11-28 21:24:19.312746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc22160) with pdu=0x2000190f8618 00:16:55.722 [2024-11-28 21:24:19.313916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.722 [2024-11-28 21:24:19.313949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:55.722 [2024-11-28 21:24:19.329288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f81e0 00:16:55.722 [2024-11-28 21:24:19.330483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.722 [2024-11-28 21:24:19.330527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:55.722 [2024-11-28 21:24:19.345820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f7da8 00:16:55.722 [2024-11-28 21:24:19.347030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.722 [2024-11-28 21:24:19.347100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:55.722 [2024-11-28 21:24:19.361670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f7970 00:16:55.722 [2024-11-28 21:24:19.362852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.722 [2024-11-28 21:24:19.362897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:55.722 [2024-11-28 21:24:19.376558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f7538 00:16:55.722 [2024-11-28 21:24:19.377637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.722 [2024-11-28 21:24:19.377679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:55.722 [2024-11-28 21:24:19.390934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f7100 00:16:55.722 [2024-11-28 21:24:19.392063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.722 [2024-11-28 21:24:19.392113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.722 [2024-11-28 21:24:19.405490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f6cc8 00:16:55.722 [2024-11-28 21:24:19.406525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.722 [2024-11-28 21:24:19.406569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:55.722 [2024-11-28 21:24:19.420762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc22160) with pdu=0x2000190f6890 00:16:55.722 [2024-11-28 21:24:19.421904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.722 [2024-11-28 21:24:19.421933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:55.722 [2024-11-28 21:24:19.435504] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f6458 00:16:55.722 [2024-11-28 21:24:19.436580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.722 [2024-11-28 21:24:19.436624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:55.722 [2024-11-28 21:24:19.450052] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f6020 00:16:55.722 [2024-11-28 21:24:19.451070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.722 [2024-11-28 21:24:19.451132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.465283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f5be8 00:16:55.982 [2024-11-28 21:24:19.466358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.466388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.479988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f57b0 00:16:55.982 [2024-11-28 21:24:19.481056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.481108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.495928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f5378 00:16:55.982 [2024-11-28 21:24:19.496985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.497053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.511656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f4f40 00:16:55.982 [2024-11-28 21:24:19.512677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.512736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.527253] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f4b08 00:16:55.982 [2024-11-28 21:24:19.528315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.528359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.542281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f46d0 00:16:55.982 [2024-11-28 21:24:19.543316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.543362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.557230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f4298 00:16:55.982 [2024-11-28 21:24:19.558226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.558272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.572012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f3e60 00:16:55.982 [2024-11-28 21:24:19.572974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.573026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.586976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f3a28 00:16:55.982 [2024-11-28 21:24:19.587978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.588035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.602561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f35f0 00:16:55.982 [2024-11-28 21:24:19.603581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.603628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.618681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f31b8 00:16:55.982 [2024-11-28 21:24:19.619679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.619723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.633472] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f2d80 00:16:55.982 [2024-11-28 21:24:19.634399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.634443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.648595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f2948 00:16:55.982 [2024-11-28 21:24:19.649547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.649590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.662994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f2510 00:16:55.982 [2024-11-28 21:24:19.663944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.664002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.678091] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f20d8 00:16:55.982 [2024-11-28 21:24:19.679048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.679086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.693504] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f1ca0 00:16:55.982 [2024-11-28 21:24:19.694443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.694486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:55.982 [2024-11-28 21:24:19.710264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f1868 00:16:55.982 [2024-11-28 21:24:19.711213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.982 [2024-11-28 21:24:19.711253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:56.242 [2024-11-28 21:24:19.727488] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f1430 00:16:56.242 [2024-11-28 21:24:19.728388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.242 [2024-11-28 21:24:19.728464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:56.242 
[2024-11-28 21:24:19.743259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f0ff8 00:16:56.242 [2024-11-28 21:24:19.744159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.242 [2024-11-28 21:24:19.744202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:56.242 [2024-11-28 21:24:19.758342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f0bc0 00:16:56.242 [2024-11-28 21:24:19.759185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.242 [2024-11-28 21:24:19.759258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:56.242 [2024-11-28 21:24:19.773409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f0788 00:16:56.242 [2024-11-28 21:24:19.774255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.242 [2024-11-28 21:24:19.774304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:56.242 [2024-11-28 21:24:19.788602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190f0350 00:16:56.242 [2024-11-28 21:24:19.789400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.242 [2024-11-28 21:24:19.789445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:56.242 [2024-11-28 21:24:19.803777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190eff18 00:16:56.242 [2024-11-28 21:24:19.804614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.242 [2024-11-28 21:24:19.804659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:56.242 [2024-11-28 21:24:19.819366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190efae0 00:16:56.242 [2024-11-28 21:24:19.820231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.242 [2024-11-28 21:24:19.820292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:56.242 [2024-11-28 21:24:19.834862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ef6a8 00:16:56.242 [2024-11-28 21:24:19.835643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.242 [2024-11-28 21:24:19.835688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 
p:0 m:0 dnr:0 00:16:56.242 [2024-11-28 21:24:19.850004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ef270 00:16:56.242 [2024-11-28 21:24:19.850760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.242 [2024-11-28 21:24:19.850792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:56.242 [2024-11-28 21:24:19.864503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190eee38 00:16:56.242 [2024-11-28 21:24:19.865256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.242 [2024-11-28 21:24:19.865286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:56.242 [2024-11-28 21:24:19.880021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190eea00 00:16:56.242 [2024-11-28 21:24:19.880780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.242 [2024-11-28 21:24:19.880826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:56.242 [2024-11-28 21:24:19.896634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ee5c8 00:16:56.242 [2024-11-28 21:24:19.897427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.242 [2024-11-28 21:24:19.897485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:56.242 [2024-11-28 21:24:19.912515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ee190 00:16:56.242 [2024-11-28 21:24:19.913241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.242 [2024-11-28 21:24:19.913268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:56.242 [2024-11-28 21:24:19.927751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190edd58 00:16:56.242 [2024-11-28 21:24:19.928421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.242 [2024-11-28 21:24:19.928452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:56.242 [2024-11-28 21:24:19.943355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ed920 00:16:56.242 [2024-11-28 21:24:19.944073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.242 [2024-11-28 21:24:19.944116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:003d p:0 m:0 dnr:0 00:16:56.242 [2024-11-28 21:24:19.957879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ed4e8 00:16:56.243 [2024-11-28 21:24:19.958590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.243 [2024-11-28 21:24:19.958635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:56.243 [2024-11-28 21:24:19.972820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ed0b0 00:16:56.243 [2024-11-28 21:24:19.973572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.243 [2024-11-28 21:24:19.973602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:56.502 [2024-11-28 21:24:19.988500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ecc78 00:16:56.502 [2024-11-28 21:24:19.989215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.502 [2024-11-28 21:24:19.989246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:56.502 [2024-11-28 21:24:20.003413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ec840 00:16:56.502 [2024-11-28 21:24:20.004069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.502 [2024-11-28 21:24:20.004116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:56.502 [2024-11-28 21:24:20.020637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ec408 00:16:56.502 [2024-11-28 21:24:20.021279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.502 [2024-11-28 21:24:20.021317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:56.502 [2024-11-28 21:24:20.036705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ebfd0 00:16:56.502 [2024-11-28 21:24:20.037332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.502 [2024-11-28 21:24:20.037366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:56.502 [2024-11-28 21:24:20.051306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ebb98 00:16:56.502 [2024-11-28 21:24:20.051919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.502 [2024-11-28 21:24:20.051950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:56.502 [2024-11-28 21:24:20.065856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190eb760 00:16:56.502 [2024-11-28 21:24:20.066458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.502 [2024-11-28 21:24:20.066489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:56.502 [2024-11-28 21:24:20.080266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190eb328 00:16:56.502 [2024-11-28 21:24:20.080815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.502 [2024-11-28 21:24:20.080846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:56.502 [2024-11-28 21:24:20.094534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190eaef0 00:16:56.502 [2024-11-28 21:24:20.095113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.502 [2024-11-28 21:24:20.095166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:56.502 [2024-11-28 21:24:20.108903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190eaab8 00:16:56.502 [2024-11-28 21:24:20.109450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.502 [2024-11-28 21:24:20.109480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:56.502 [2024-11-28 21:24:20.123388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ea680 00:16:56.502 [2024-11-28 21:24:20.123946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.502 [2024-11-28 21:24:20.123976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:56.502 [2024-11-28 21:24:20.137695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190ea248 00:16:56.502 [2024-11-28 21:24:20.138262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.502 [2024-11-28 21:24:20.138293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:56.502 [2024-11-28 21:24:20.152049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e9e10 00:16:56.502 [2024-11-28 21:24:20.152564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.502 [2024-11-28 21:24:20.152604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:56.502 [2024-11-28 21:24:20.166481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e99d8 00:16:56.502 [2024-11-28 21:24:20.166973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.502 [2024-11-28 21:24:20.167010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:56.502 [2024-11-28 21:24:20.180955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e95a0 00:16:56.502 [2024-11-28 21:24:20.181426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.502 [2024-11-28 21:24:20.181467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:56.502 [2024-11-28 21:24:20.195674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e9168 00:16:56.502 [2024-11-28 21:24:20.196164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.502 [2024-11-28 21:24:20.196205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:56.503 [2024-11-28 21:24:20.211666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e8d30 00:16:56.503 [2024-11-28 21:24:20.212246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.503 [2024-11-28 21:24:20.212274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:56.503 [2024-11-28 21:24:20.227887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e88f8 00:16:56.503 [2024-11-28 21:24:20.228417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.503 [2024-11-28 21:24:20.228458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:56.503 [2024-11-28 21:24:20.243851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e84c0 00:16:56.762 [2024-11-28 21:24:20.244384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.244424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.258666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e8088 00:16:56.763 [2024-11-28 21:24:20.259171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.259198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.273028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e7c50 00:16:56.763 [2024-11-28 21:24:20.273437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.273478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.287415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e7818 00:16:56.763 [2024-11-28 21:24:20.287860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.287901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.301620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e73e0 00:16:56.763 [2024-11-28 21:24:20.302046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.302095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.315931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e6fa8 00:16:56.763 [2024-11-28 21:24:20.316306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.316346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.330253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e6b70 00:16:56.763 [2024-11-28 21:24:20.330637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.330662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.344749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e6738 00:16:56.763 [2024-11-28 21:24:20.345124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.345149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.359023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e6300 00:16:56.763 [2024-11-28 21:24:20.359449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.359505] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.373436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e5ec8 00:16:56.763 [2024-11-28 21:24:20.373764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.373788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.387812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e5a90 00:16:56.763 [2024-11-28 21:24:20.388149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.388174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.402034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e5658 00:16:56.763 [2024-11-28 21:24:20.402377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.402416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.418072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e5220 00:16:56.763 [2024-11-28 21:24:20.418421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.418445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.434746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e4de8 00:16:56.763 [2024-11-28 21:24:20.435125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.435182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.451332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e49b0 00:16:56.763 [2024-11-28 21:24:20.451657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.451710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.468333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e4578 00:16:56.763 [2024-11-28 21:24:20.468628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.468652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.485102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e4140 00:16:56.763 [2024-11-28 21:24:20.485375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.485402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:56.763 [2024-11-28 21:24:20.501452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e3d08 00:16:56.763 [2024-11-28 21:24:20.501752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.763 [2024-11-28 21:24:20.501781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.517796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e38d0 00:16:57.023 [2024-11-28 21:24:20.518145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.023 [2024-11-28 21:24:20.518171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.532392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e3498 00:16:57.023 [2024-11-28 21:24:20.532634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.023 [2024-11-28 21:24:20.532658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.546557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e3060 00:16:57.023 [2024-11-28 21:24:20.546789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.023 [2024-11-28 21:24:20.546813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.561114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e2c28 00:16:57.023 [2024-11-28 21:24:20.561343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.023 [2024-11-28 21:24:20.561367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.575304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e27f0 00:16:57.023 [2024-11-28 21:24:20.575561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.023 [2024-11-28 
21:24:20.575585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.589433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e23b8 00:16:57.023 [2024-11-28 21:24:20.589643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.023 [2024-11-28 21:24:20.589662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.603748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e1f80 00:16:57.023 [2024-11-28 21:24:20.603949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.023 [2024-11-28 21:24:20.603968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.617930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e1b48 00:16:57.023 [2024-11-28 21:24:20.618170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.023 [2024-11-28 21:24:20.618207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.632194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e1710 00:16:57.023 [2024-11-28 21:24:20.632377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.023 [2024-11-28 21:24:20.632396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.647098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e12d8 00:16:57.023 [2024-11-28 21:24:20.647302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.023 [2024-11-28 21:24:20.647324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.662784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e0ea0 00:16:57.023 [2024-11-28 21:24:20.662952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.023 [2024-11-28 21:24:20.662973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.677872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e0a68 00:16:57.023 [2024-11-28 21:24:20.678084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:57.023 [2024-11-28 21:24:20.678105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.692784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e0630 00:16:57.023 [2024-11-28 21:24:20.692938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.023 [2024-11-28 21:24:20.692958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.707605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190e01f8 00:16:57.023 [2024-11-28 21:24:20.707750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.023 [2024-11-28 21:24:20.707770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.723631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190dfdc0 00:16:57.023 [2024-11-28 21:24:20.723783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.023 [2024-11-28 21:24:20.723803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.738417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190df988 00:16:57.023 [2024-11-28 21:24:20.738559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.023 [2024-11-28 21:24:20.738585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:57.023 [2024-11-28 21:24:20.753608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190df550 00:16:57.023 [2024-11-28 21:24:20.753759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.024 [2024-11-28 21:24:20.753781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:57.284 [2024-11-28 21:24:20.769830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190df118 00:16:57.284 [2024-11-28 21:24:20.769946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.284 [2024-11-28 21:24:20.769969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:57.284 [2024-11-28 21:24:20.785234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190dece0 00:16:57.284 [2024-11-28 21:24:20.785334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23261 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:57.284 [2024-11-28 21:24:20.785355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:57.284 [2024-11-28 21:24:20.800597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190de8a8 00:16:57.284 [2024-11-28 21:24:20.800703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.284 [2024-11-28 21:24:20.800723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:57.284 [2024-11-28 21:24:20.814984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190de038 00:16:57.284 [2024-11-28 21:24:20.815083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.284 [2024-11-28 21:24:20.815103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:57.284 [2024-11-28 21:24:20.835344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190de038 00:16:57.284 [2024-11-28 21:24:20.836704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.284 [2024-11-28 21:24:20.836751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.284 [2024-11-28 21:24:20.851555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190de470 00:16:57.284 [2024-11-28 21:24:20.853035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.284 [2024-11-28 21:24:20.853131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.284 [2024-11-28 21:24:20.867538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190de8a8 00:16:57.284 [2024-11-28 21:24:20.868922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.284 [2024-11-28 21:24:20.868968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:57.284 [2024-11-28 21:24:20.882815] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc22160) with pdu=0x2000190dece0 00:16:57.284 [2024-11-28 21:24:20.884216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.284 [2024-11-28 21:24:20.884259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:57.284 00:16:57.284 Latency(us) 00:16:57.284 [2024-11-28T21:24:21.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.284 [2024-11-28T21:24:21.027Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.284 nvme0n1 : 2.00 16754.41 
65.45 0.00 0.00 7633.01 6821.70 22163.08 00:16:57.284 [2024-11-28T21:24:21.027Z] =================================================================================================================== 00:16:57.284 [2024-11-28T21:24:21.027Z] Total : 16754.41 65.45 0.00 0.00 7633.01 6821.70 22163.08 00:16:57.284 0 00:16:57.284 21:24:20 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:57.284 21:24:20 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:57.284 21:24:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:57.284 21:24:20 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:57.284 | .driver_specific 00:16:57.284 | .nvme_error 00:16:57.284 | .status_code 00:16:57.284 | .command_transient_transport_error' 00:16:57.544 21:24:21 -- host/digest.sh@71 -- # (( 131 > 0 )) 00:16:57.544 21:24:21 -- host/digest.sh@73 -- # killprocess 83602 00:16:57.544 21:24:21 -- common/autotest_common.sh@936 -- # '[' -z 83602 ']' 00:16:57.544 21:24:21 -- common/autotest_common.sh@940 -- # kill -0 83602 00:16:57.544 21:24:21 -- common/autotest_common.sh@941 -- # uname 00:16:57.544 21:24:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:57.544 21:24:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83602 00:16:57.544 killing process with pid 83602 00:16:57.544 Received shutdown signal, test time was about 2.000000 seconds 00:16:57.544 00:16:57.544 Latency(us) 00:16:57.544 [2024-11-28T21:24:21.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.544 [2024-11-28T21:24:21.287Z] =================================================================================================================== 00:16:57.544 [2024-11-28T21:24:21.287Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:57.544 21:24:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:57.544 21:24:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:57.544 21:24:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83602' 00:16:57.544 21:24:21 -- common/autotest_common.sh@955 -- # kill 83602 00:16:57.544 21:24:21 -- common/autotest_common.sh@960 -- # wait 83602 00:16:57.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:57.803 21:24:21 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:16:57.803 21:24:21 -- host/digest.sh@54 -- # local rw bs qd 00:16:57.803 21:24:21 -- host/digest.sh@56 -- # rw=randwrite 00:16:57.803 21:24:21 -- host/digest.sh@56 -- # bs=131072 00:16:57.803 21:24:21 -- host/digest.sh@56 -- # qd=16 00:16:57.803 21:24:21 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:16:57.803 21:24:21 -- host/digest.sh@58 -- # bperfpid=83649 00:16:57.803 21:24:21 -- host/digest.sh@60 -- # waitforlisten 83649 /var/tmp/bperf.sock 00:16:57.803 21:24:21 -- common/autotest_common.sh@829 -- # '[' -z 83649 ']' 00:16:57.803 21:24:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:57.803 21:24:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.803 21:24:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:16:57.803 21:24:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.803 21:24:21 -- common/autotest_common.sh@10 -- # set +x 00:16:57.803 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:57.803 Zero copy mechanism will not be used. 00:16:57.803 [2024-11-28 21:24:21.366484] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:57.803 [2024-11-28 21:24:21.366571] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83649 ] 00:16:57.803 [2024-11-28 21:24:21.499257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.803 [2024-11-28 21:24:21.531725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.062 21:24:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.062 21:24:21 -- common/autotest_common.sh@862 -- # return 0 00:16:58.062 21:24:21 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:58.062 21:24:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:58.321 21:24:21 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:58.321 21:24:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.321 21:24:21 -- common/autotest_common.sh@10 -- # set +x 00:16:58.321 21:24:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.321 21:24:21 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:58.321 21:24:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:58.579 nvme0n1 00:16:58.579 21:24:22 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:58.579 21:24:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.579 21:24:22 -- common/autotest_common.sh@10 -- # set +x 00:16:58.579 21:24:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.579 21:24:22 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:58.579 21:24:22 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:58.839 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:58.839 Zero copy mechanism will not be used. 00:16:58.839 Running I/O for 2 seconds... 
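Condensed, the xtrace above amounts to the sequence below. This is a minimal sketch assembled only from commands visible in the trace; it assumes SPDK_DIR points at the repo path used throughout this log, that the nvmf target already exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and that the rpc_cmd helper in the trace talks to the target app's default RPC socket (the target-side socket is not shown in the log, so the calls without -s below are an assumption).

# Sketch of one digest-error pass as driven by host/digest.sh (assumptions noted above).
SPDK_DIR=/home/vagrant/spdk_repo/spdk      # path as used throughout this log
BPERF_SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf as the TCP initiator: random 128 KiB writes, QD 16, 2 s, wait for RPC (-z).
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randwrite -o 131072 -t 2 -q 16 -z &

# 2. Keep per-command NVMe error statistics and retry failed I/O indefinitely on the initiator.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# 3. Make sure crc32c error injection starts out disabled on the target (socket assumed),
#    then attach the controller with data digest enabled (--ddgst).
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Corrupt 32 crc32c calculations on the target, then run the workload; each corrupted
#    digest shows up on the initiator as the "Data digest error" /
#    "COMMAND TRANSIENT TRANSPORT ERROR" pairs that fill this log.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

# 5. Read back how many commands completed with a transient transport error.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

For the run that finished just above, step 5 returned 131, which is what the (( 131 > 0 )) check at host/digest.sh@71 asserts before bperf pid 83602 is killed; the trace that follows repeats the same setup for the next pass (pid 83649) with a 128 KiB random-write, QD 16 workload.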
00:16:58.839 [2024-11-28 21:24:22.361838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.839 [2024-11-28 21:24:22.362190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.839 [2024-11-28 21:24:22.362221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.839 [2024-11-28 21:24:22.366932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.839 [2024-11-28 21:24:22.367288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.839 [2024-11-28 21:24:22.367320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.839 [2024-11-28 21:24:22.372178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.839 [2024-11-28 21:24:22.372470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.839 [2024-11-28 21:24:22.372499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.839 [2024-11-28 21:24:22.377190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.839 [2024-11-28 21:24:22.377487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.839 [2024-11-28 21:24:22.377515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.839 [2024-11-28 21:24:22.382117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.839 [2024-11-28 21:24:22.382399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.382426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.387038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.387377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.387406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.391951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.392251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.392278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.396877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.397215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.397244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.401792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.402100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.402127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.406699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.406975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.407011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.411687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.411965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.411992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.416583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.416883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.416911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.421554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.421831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.421859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.426552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.426839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.426866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.431509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.431816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.431855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.436430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.436726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.436754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.441145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.441442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.441469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.446084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.446364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.446390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.450909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.451239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.451266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.455978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.456307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.456335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.461621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.461956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.461986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.467194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.467572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.467598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.472823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.473197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.473230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.478408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.478738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.478767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.483939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.484259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.484286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.489486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.489824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.489855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.495004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.495383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.495413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.500694] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.501056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 
[2024-11-28 21:24:22.501092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.506260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.506538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.506565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.511868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.512208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.512236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.517374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.517697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.517726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.840 [2024-11-28 21:24:22.522948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.840 [2024-11-28 21:24:22.523330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.840 [2024-11-28 21:24:22.523359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.841 [2024-11-28 21:24:22.528594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.841 [2024-11-28 21:24:22.528931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.841 [2024-11-28 21:24:22.528963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.841 [2024-11-28 21:24:22.534249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.841 [2024-11-28 21:24:22.534534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.841 [2024-11-28 21:24:22.534562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.841 [2024-11-28 21:24:22.539785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.841 [2024-11-28 21:24:22.540149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:58.841 [2024-11-28 21:24:22.540187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.841 [2024-11-28 21:24:22.545234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.841 [2024-11-28 21:24:22.545530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.841 [2024-11-28 21:24:22.545557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.841 [2024-11-28 21:24:22.550288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.841 [2024-11-28 21:24:22.550568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.841 [2024-11-28 21:24:22.550595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.841 [2024-11-28 21:24:22.555317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.841 [2024-11-28 21:24:22.555654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.841 [2024-11-28 21:24:22.555681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.841 [2024-11-28 21:24:22.560436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.841 [2024-11-28 21:24:22.560727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.841 [2024-11-28 21:24:22.560754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.841 [2024-11-28 21:24:22.565395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.841 [2024-11-28 21:24:22.565687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.841 [2024-11-28 21:24:22.565714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.841 [2024-11-28 21:24:22.570344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.841 [2024-11-28 21:24:22.570623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.841 [2024-11-28 21:24:22.570650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.841 [2024-11-28 21:24:22.575499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:58.841 [2024-11-28 21:24:22.575833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.841 [2024-11-28 21:24:22.575877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.101 [2024-11-28 21:24:22.581436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.101 [2024-11-28 21:24:22.581770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.101 [2024-11-28 21:24:22.581814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.101 [2024-11-28 21:24:22.586819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.101 [2024-11-28 21:24:22.587196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.101 [2024-11-28 21:24:22.587227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.101 [2024-11-28 21:24:22.591868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.101 [2024-11-28 21:24:22.592196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.101 [2024-11-28 21:24:22.592224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.101 [2024-11-28 21:24:22.596887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.101 [2024-11-28 21:24:22.597195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.101 [2024-11-28 21:24:22.597223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.101 [2024-11-28 21:24:22.601867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.101 [2024-11-28 21:24:22.602178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.101 [2024-11-28 21:24:22.602204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.101 [2024-11-28 21:24:22.606830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.101 [2024-11-28 21:24:22.607196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.101 [2024-11-28 21:24:22.607226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.101 [2024-11-28 21:24:22.611816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.101 [2024-11-28 21:24:22.612130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.101 [2024-11-28 21:24:22.612157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.101 [2024-11-28 21:24:22.616877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.101 [2024-11-28 21:24:22.617187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.101 [2024-11-28 21:24:22.617214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.101 [2024-11-28 21:24:22.621852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.101 [2024-11-28 21:24:22.622146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.101 [2024-11-28 21:24:22.622173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.101 [2024-11-28 21:24:22.626882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.101 [2024-11-28 21:24:22.627235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.101 [2024-11-28 21:24:22.627266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.101 [2024-11-28 21:24:22.631907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.101 [2024-11-28 21:24:22.632202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.101 [2024-11-28 21:24:22.632230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.101 [2024-11-28 21:24:22.636861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.101 [2024-11-28 21:24:22.637201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.101 [2024-11-28 21:24:22.637231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.101 [2024-11-28 21:24:22.641860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.101 [2024-11-28 21:24:22.642170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.101 [2024-11-28 21:24:22.642197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.646786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 
21:24:22.647112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.647147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.651859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.652213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.652241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.656827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.657153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.657181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.661790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.662097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.662124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.666805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.667134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.667202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.671863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.672174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.672201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.676710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.676989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.677040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.681724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with 
pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.682000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.682036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.686625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.686922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.686950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.691863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.692168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.692196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.696746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.697049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.697075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.701802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.702109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.702137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.706774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.707098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.707125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.711909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.712219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.712246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.716776] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.717098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.717125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.721812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.722124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.722152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.726771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.727103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.727131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.731840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.732147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.732174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.736796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.737118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.737146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.741992] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.742298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.742326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.746943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.747312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.747342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.751863] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.752213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.752240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.756901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.757233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.757261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.761863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.762171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.762199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.766831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.767199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.767229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.771912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.772196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.772222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.102 [2024-11-28 21:24:22.776825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.102 [2024-11-28 21:24:22.777149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.102 [2024-11-28 21:24:22.777177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.103 [2024-11-28 21:24:22.781845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.103 [2024-11-28 21:24:22.782155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-11-28 21:24:22.782182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:16:59.103 [2024-11-28 21:24:22.786781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.103 [2024-11-28 21:24:22.787105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-11-28 21:24:22.787133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.103 [2024-11-28 21:24:22.792249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.103 [2024-11-28 21:24:22.792578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-11-28 21:24:22.792607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.103 [2024-11-28 21:24:22.797757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.103 [2024-11-28 21:24:22.798142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-11-28 21:24:22.798170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.103 [2024-11-28 21:24:22.803373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.103 [2024-11-28 21:24:22.803730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-11-28 21:24:22.803759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.103 [2024-11-28 21:24:22.808954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.103 [2024-11-28 21:24:22.809317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-11-28 21:24:22.809345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.103 [2024-11-28 21:24:22.814142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.103 [2024-11-28 21:24:22.814426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-11-28 21:24:22.814453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.103 [2024-11-28 21:24:22.819254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.103 [2024-11-28 21:24:22.819610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-11-28 21:24:22.819637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.103 [2024-11-28 21:24:22.824477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.103 [2024-11-28 21:24:22.824780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-11-28 21:24:22.824807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.103 [2024-11-28 21:24:22.829663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.103 [2024-11-28 21:24:22.829941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-11-28 21:24:22.829968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.103 [2024-11-28 21:24:22.834863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.103 [2024-11-28 21:24:22.835243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-11-28 21:24:22.835273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.103 [2024-11-28 21:24:22.840406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.103 [2024-11-28 21:24:22.840748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-11-28 21:24:22.840792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.394 [2024-11-28 21:24:22.845774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.394 [2024-11-28 21:24:22.846105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.394 [2024-11-28 21:24:22.846132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.394 [2024-11-28 21:24:22.851121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.394 [2024-11-28 21:24:22.851470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.394 [2024-11-28 21:24:22.851515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.394 [2024-11-28 21:24:22.856127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.394 [2024-11-28 21:24:22.856407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.394 [2024-11-28 21:24:22.856433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.394 [2024-11-28 21:24:22.861194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.394 [2024-11-28 21:24:22.861498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.394 [2024-11-28 21:24:22.861525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.394 [2024-11-28 21:24:22.866164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.394 [2024-11-28 21:24:22.866445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.394 [2024-11-28 21:24:22.866472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.394 [2024-11-28 21:24:22.871248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.394 [2024-11-28 21:24:22.871589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.394 [2024-11-28 21:24:22.871615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.394 [2024-11-28 21:24:22.876257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.394 [2024-11-28 21:24:22.876537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.394 [2024-11-28 21:24:22.876565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.394 [2024-11-28 21:24:22.881324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.394 [2024-11-28 21:24:22.881618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.394 [2024-11-28 21:24:22.881645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.394 [2024-11-28 21:24:22.886332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.886611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.886638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.891321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.891673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.891700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.896358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.896636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.896662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.901500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.901799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.901827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.906493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.906772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.906799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.911528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.911820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.911847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.916514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.916793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.916820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.921585] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.921907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.921936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.926780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.927122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 
[2024-11-28 21:24:22.927176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.932478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.932798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.932825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.938071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.938430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.938457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.943639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.943947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.943976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.949268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.949606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.949676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.954715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.954999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.955054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.960248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.960622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.960651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.965809] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.966164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.966194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.971320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.971653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.971680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.976625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.976911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.976939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.981806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.982156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.982184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.986982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.987340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.987370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.992296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.992579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.992607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:22.997478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:22.997801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:22.997832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:23.002608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:23.002899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:23.002928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:23.007876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:23.008199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:23.008227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:23.013177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:23.013462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:23.013489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:23.018374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:23.018734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:23.018764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.395 [2024-11-28 21:24:23.024117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.395 [2024-11-28 21:24:23.024420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.395 [2024-11-28 21:24:23.024450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.029366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.029688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.029716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.034641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.034974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.035033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.039918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.040241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.040269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.044910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.045207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.045234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.050114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.050399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.050426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.055111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.055447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.055489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.060168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.060504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.060540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.065237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.065522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.065549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.070305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.070631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.070661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.075473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 
[2024-11-28 21:24:23.075789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.075817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.080845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.081163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.081191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.085844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.086132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.086159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.090675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.090962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.090990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.095999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.096350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.096378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.101587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.101864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.101890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.106514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.106791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.106818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.111697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) 
with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.111992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.112044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.116707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.116986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.117007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.121666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.121943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.121970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.126555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.126840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.126867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.396 [2024-11-28 21:24:23.131595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.396 [2024-11-28 21:24:23.131891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.396 [2024-11-28 21:24:23.131919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.656 [2024-11-28 21:24:23.136983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.656 [2024-11-28 21:24:23.137323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.656 [2024-11-28 21:24:23.137364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.656 [2024-11-28 21:24:23.142408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.656 [2024-11-28 21:24:23.142720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.142748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.147804] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.148161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.148189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.153406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.153716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.153745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.158860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.159254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.159285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.164374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.164695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.164726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.169712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.170026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.170078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.174963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.175369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.175400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.180174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.180468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.180511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.185303] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.185598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.185627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.190212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.190497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.190525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.195505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.195803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.195832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.200503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.200810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.200838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.205773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.206113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.206142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.210774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.211071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.211099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.215897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.216292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.216322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:16:59.657 [2024-11-28 21:24:23.220921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.221257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.221300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.225889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.226210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.226238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.231199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.231520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.231564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.236202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.236487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.236514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.241319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.241621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.241648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.246341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.246628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.246655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.251375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.251729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.251757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.256715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.257027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.257080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.261779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.262091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.262120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.266963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.267310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.267340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.272071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.272377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.272405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.277314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.277623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.277650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.282499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.282826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.282855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.288210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.288547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.288577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.293756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.294117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.294144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.299283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.299642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.299708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.304668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.304990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.305057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.309896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.310226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.310253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.315066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.315402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.315444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.319998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.320313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.320340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.325097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.325377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.325404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.330039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.330367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.330394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.334962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.335283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.335311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.339920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.340207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.340234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.344836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.345164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.345191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.349883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.350203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.350231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.355288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.355610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.355636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.360722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.361007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 
[2024-11-28 21:24:23.361042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.365727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.366006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.366027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.370767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.371110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.371138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.375928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.376275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.657 [2024-11-28 21:24:23.376303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.657 [2024-11-28 21:24:23.381032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.657 [2024-11-28 21:24:23.381325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.658 [2024-11-28 21:24:23.381352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.658 [2024-11-28 21:24:23.386082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.658 [2024-11-28 21:24:23.386359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.658 [2024-11-28 21:24:23.386386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.658 [2024-11-28 21:24:23.390931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.658 [2024-11-28 21:24:23.391295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.658 [2024-11-28 21:24:23.391325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.658 [2024-11-28 21:24:23.396561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.658 [2024-11-28 21:24:23.396910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:59.658 [2024-11-28 21:24:23.396938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.917 [2024-11-28 21:24:23.401839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.917 [2024-11-28 21:24:23.402158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.917 [2024-11-28 21:24:23.402185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.917 [2024-11-28 21:24:23.406863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.407181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.407210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.411854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.412164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.412191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.416864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.417191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.417218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.422001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.422290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.422316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.426922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.427258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.427286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.431841] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.432147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.432175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.436742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.437027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.437096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.441718] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.441999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.442034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.446641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.446922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.446949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.451628] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.451908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.451935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.456624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.456929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.456956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.461586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.461863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.461889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.466547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.466823] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.466850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.471636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.471931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.471957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.476693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.476987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.477026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.481746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.482049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.482091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.486704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.486982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.487020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.491687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.491972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.492010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.497091] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.497411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.497441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.502610] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.502935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.502965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.507905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.508229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.508256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.513053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.513331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.513359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.517926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.518216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.518243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.522858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.523216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.523240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.527873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.528172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.528199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.532818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.533107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.533134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.537708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 
21:24:23.537992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.538042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.542578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.542881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.542908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.547611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.547891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.547917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.552525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.552820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.552848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.557375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.557655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.557682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.562284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.562562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.562588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.567333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.567674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.567701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.572391] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with 
pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.572675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.572702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.577308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.577593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.577621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.582272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.582549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.582576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.587342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.587679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.587707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.592377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.592663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.592690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.597288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.597564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.597591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.602259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.602538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.602565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.607262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.607601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.607627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.612475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.612768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.612796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.617956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.618258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.618285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.622971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.623336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.623367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.627976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.628281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.628307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.632916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.633210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.918 [2024-11-28 21:24:23.633237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.918 [2024-11-28 21:24:23.637823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.918 [2024-11-28 21:24:23.638130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.919 [2024-11-28 21:24:23.638157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.919 [2024-11-28 21:24:23.642759] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.919 [2024-11-28 21:24:23.643067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.919 [2024-11-28 21:24:23.643093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.919 [2024-11-28 21:24:23.647828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.919 [2024-11-28 21:24:23.648137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.919 [2024-11-28 21:24:23.648163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.919 [2024-11-28 21:24:23.652715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.919 [2024-11-28 21:24:23.653010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.919 [2024-11-28 21:24:23.653046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.919 [2024-11-28 21:24:23.658193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:16:59.919 [2024-11-28 21:24:23.658479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.919 [2024-11-28 21:24:23.658506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.663416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.663734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.663762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.668512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.668798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.668826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.673624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.673911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.673938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:00.179 [2024-11-28 21:24:23.678567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.678877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.678905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.683709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.683989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.684027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.688647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.688943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.688969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.693652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.693929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.693955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.698495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.698811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.698840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.704090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.704446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.704474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.709565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.709881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.709910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.714709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.714995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.715043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.719834] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.720164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.720191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.724827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.725135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.725164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.729869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.730199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.730227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.734910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.735264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.735294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.739873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.740192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.740219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.744824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.745130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.745158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.749696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.749977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.750029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.754735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.755020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.755070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.759778] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.760069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.760107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.764809] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.179 [2024-11-28 21:24:23.765116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.179 [2024-11-28 21:24:23.765143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.179 [2024-11-28 21:24:23.769790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.770118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.770145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.774803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.775128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.775197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.779806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.780100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.780137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.784740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.785027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.785052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.789683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.789962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.789989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.794685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.794968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.794995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.799734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.800010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.800046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.804691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.804970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.804996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.809769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.810077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.810104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.814785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.815111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 
[2024-11-28 21:24:23.815137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.819870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.820182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.820209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.824930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.825222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.825249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.829891] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.830223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.830251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.834940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.835329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.835359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.839920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.840209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.840236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.844921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.845214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.845240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.849909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.850199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.850225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.854893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.855290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.855321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.859979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.860315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.860342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.864992] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.865281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.865307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.870274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.870621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.870664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.875848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.876161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.876190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.880902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.881213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.881239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.885873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.886183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.886211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.890834] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.891183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.891213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.895859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.896187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.180 [2024-11-28 21:24:23.896215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.180 [2024-11-28 21:24:23.900876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.180 [2024-11-28 21:24:23.901186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.181 [2024-11-28 21:24:23.901213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.181 [2024-11-28 21:24:23.905836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.181 [2024-11-28 21:24:23.906144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.181 [2024-11-28 21:24:23.906172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.181 [2024-11-28 21:24:23.910840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.181 [2024-11-28 21:24:23.911213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.181 [2024-11-28 21:24:23.911254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.181 [2024-11-28 21:24:23.916117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.181 [2024-11-28 21:24:23.916543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.181 [2024-11-28 21:24:23.916573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:23.921799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:23.922111] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:23.922142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:23.927348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:23.927711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:23.927740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:23.932476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:23.932786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:23.932814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:23.937507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:23.937787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:23.937814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:23.942483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:23.942776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:23.942803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:23.948201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:23.948485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:23.948513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:23.953568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:23.953934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:23.953963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:23.958924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:23.959307] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:23.959336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:23.964383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:23.964716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:23.964745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:23.970168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:23.970530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:23.970555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:23.975710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:23.975990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:23.976061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:23.981190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:23.981555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:23.981582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:23.986589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:23.986866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:23.986893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:23.991933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:23.992256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:23.992284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:23.997252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 
00:17:00.440 [2024-11-28 21:24:23.997563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:23.997590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:24.002525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:24.002806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.440 [2024-11-28 21:24:24.002833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.440 [2024-11-28 21:24:24.007723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.440 [2024-11-28 21:24:24.008002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.008037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.012616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.012919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.012946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.017629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.017910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.017937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.022545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.022823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.022849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.027538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.027828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.027856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.032434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.032732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.032760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.037370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.037648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.037675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.042266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.042545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.042572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.047269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.047585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.047612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.052372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.052676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.052703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.057477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.057766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.057793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.062476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.062759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.062786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.067624] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.067908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.067935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.072687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.072981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.073016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.078081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.078389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.078452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.083774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.084093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.084135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.089112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.089482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.089509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.094707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.095000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.095073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.100273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.100581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.100609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:17:00.441 [2024-11-28 21:24:24.105648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.105941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.105969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.111121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.111470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.111516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.116296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.116590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.116617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.121379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.121668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.121696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.126531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.126823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.126851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.132153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.132455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.132485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.137690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.137974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.138011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.142685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.441 [2024-11-28 21:24:24.142969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.441 [2024-11-28 21:24:24.142996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.441 [2024-11-28 21:24:24.147781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.442 [2024-11-28 21:24:24.148100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.442 [2024-11-28 21:24:24.148139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.442 [2024-11-28 21:24:24.152910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.442 [2024-11-28 21:24:24.153261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.442 [2024-11-28 21:24:24.153290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.442 [2024-11-28 21:24:24.158105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.442 [2024-11-28 21:24:24.158396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.442 [2024-11-28 21:24:24.158424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.442 [2024-11-28 21:24:24.163191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.442 [2024-11-28 21:24:24.163552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.442 [2024-11-28 21:24:24.163578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.442 [2024-11-28 21:24:24.168746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.442 [2024-11-28 21:24:24.169090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.442 [2024-11-28 21:24:24.169120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.442 [2024-11-28 21:24:24.174541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.442 [2024-11-28 21:24:24.174893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.442 [2024-11-28 21:24:24.174924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.442 [2024-11-28 21:24:24.180431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.442 [2024-11-28 21:24:24.180759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.442 [2024-11-28 21:24:24.180790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.186141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.186446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.186469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.191717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.192092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.192131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.197272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.197560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.197588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.202720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.203080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.203117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.208224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.208524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.208551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.213681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.214005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.214090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.219282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.219617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.219672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.224813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.225147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.225179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.230189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.230503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.230531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.235239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.235623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.235654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.240365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.240651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.240683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.245261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.245546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.245576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.250166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.250456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 
[2024-11-28 21:24:24.250483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.255122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.255515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.255561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.260254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.260551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.260578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.265145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.265448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.265506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.270224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.270531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.270561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.275466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.275832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.275863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.280540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.280821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.280850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.285445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.285720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.285747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.290376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.290663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.290689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.295623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.295918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.295946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.300988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.301405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.301437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.306434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.306759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.306791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.311975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.312319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.312347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.317367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.317682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.317711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.322721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.323030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.323068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.328062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.328390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.328421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.333254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.333559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.333589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.338452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.338767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.338795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.343730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.344064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.344100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.348855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.349167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.349195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.701 [2024-11-28 21:24:24.353814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc20e30) with pdu=0x2000190fef90 00:17:00.701 [2024-11-28 21:24:24.354133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.701 [2024-11-28 21:24:24.354160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.701 00:17:00.701 Latency(us) 00:17:00.701 [2024-11-28T21:24:24.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.701 [2024-11-28T21:24:24.444Z] Job: nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 16, IO size: 131072) 00:17:00.701 nvme0n1 : 2.00 5997.97 749.75 0.00 0.00 2661.85 2174.60 5928.03 00:17:00.701 [2024-11-28T21:24:24.444Z] =================================================================================================================== 00:17:00.701 [2024-11-28T21:24:24.444Z] Total : 5997.97 749.75 0.00 0.00 2661.85 2174.60 5928.03 00:17:00.701 0 00:17:00.701 21:24:24 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:00.701 21:24:24 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:00.701 21:24:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:00.701 21:24:24 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:00.701 | .driver_specific 00:17:00.701 | .nvme_error 00:17:00.701 | .status_code 00:17:00.701 | .command_transient_transport_error' 00:17:00.959 21:24:24 -- host/digest.sh@71 -- # (( 387 > 0 )) 00:17:00.959 21:24:24 -- host/digest.sh@73 -- # killprocess 83649 00:17:00.959 21:24:24 -- common/autotest_common.sh@936 -- # '[' -z 83649 ']' 00:17:00.959 21:24:24 -- common/autotest_common.sh@940 -- # kill -0 83649 00:17:00.959 21:24:24 -- common/autotest_common.sh@941 -- # uname 00:17:00.959 21:24:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:00.959 21:24:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83649 00:17:01.217 killing process with pid 83649 00:17:01.217 Received shutdown signal, test time was about 2.000000 seconds 00:17:01.217 00:17:01.217 Latency(us) 00:17:01.217 [2024-11-28T21:24:24.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.217 [2024-11-28T21:24:24.960Z] =================================================================================================================== 00:17:01.217 [2024-11-28T21:24:24.960Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:01.217 21:24:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:01.217 21:24:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:01.217 21:24:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83649' 00:17:01.217 21:24:24 -- common/autotest_common.sh@955 -- # kill 83649 00:17:01.217 21:24:24 -- common/autotest_common.sh@960 -- # wait 83649 00:17:01.217 21:24:24 -- host/digest.sh@115 -- # killprocess 83446 00:17:01.217 21:24:24 -- common/autotest_common.sh@936 -- # '[' -z 83446 ']' 00:17:01.217 21:24:24 -- common/autotest_common.sh@940 -- # kill -0 83446 00:17:01.217 21:24:24 -- common/autotest_common.sh@941 -- # uname 00:17:01.217 21:24:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:01.217 21:24:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83446 00:17:01.217 killing process with pid 83446 00:17:01.217 21:24:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:01.217 21:24:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:01.217 21:24:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83446' 00:17:01.217 21:24:24 -- common/autotest_common.sh@955 -- # kill 83446 00:17:01.217 21:24:24 -- common/autotest_common.sh@960 -- # wait 83446 00:17:01.473 ************************************ 00:17:01.473 END TEST nvmf_digest_error 00:17:01.473 ************************************ 00:17:01.473 00:17:01.473 real 0m16.567s 00:17:01.473 user 0m31.821s 00:17:01.473 sys 0m4.576s 00:17:01.473 21:24:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 
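The get_transient_errcount helper traced above is what turns the wall of digest-error notices into a pass/fail signal: it asks the bperf process for per-bdev I/O statistics over its RPC socket and pulls out the command_transient_transport_error counter, which must be non-zero because every injected data-digest failure is completed as a transient transport error (00/22). A minimal stand-alone version of the same query, reusing the socket path, bdev name, and jq filter shown in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Ask bperf for iostat on nvme0n1 and extract the transient transport error counter
    count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
              | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The digest test only passes if at least one write completed with that status (here: 387)
    (( count > 0 ))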
00:17:01.473 21:24:25 -- common/autotest_common.sh@10 -- # set +x 00:17:01.473 21:24:25 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:17:01.473 21:24:25 -- host/digest.sh@139 -- # nvmftestfini 00:17:01.473 21:24:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:01.473 21:24:25 -- nvmf/common.sh@116 -- # sync 00:17:01.473 21:24:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:01.473 21:24:25 -- nvmf/common.sh@119 -- # set +e 00:17:01.473 21:24:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:01.473 21:24:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:01.473 rmmod nvme_tcp 00:17:01.473 rmmod nvme_fabrics 00:17:01.473 rmmod nvme_keyring 00:17:01.473 21:24:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:01.473 21:24:25 -- nvmf/common.sh@123 -- # set -e 00:17:01.473 21:24:25 -- nvmf/common.sh@124 -- # return 0 00:17:01.473 21:24:25 -- nvmf/common.sh@477 -- # '[' -n 83446 ']' 00:17:01.473 21:24:25 -- nvmf/common.sh@478 -- # killprocess 83446 00:17:01.473 21:24:25 -- common/autotest_common.sh@936 -- # '[' -z 83446 ']' 00:17:01.473 21:24:25 -- common/autotest_common.sh@940 -- # kill -0 83446 00:17:01.473 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83446) - No such process 00:17:01.473 Process with pid 83446 is not found 00:17:01.473 21:24:25 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83446 is not found' 00:17:01.473 21:24:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:01.473 21:24:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:01.473 21:24:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:01.473 21:24:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.473 21:24:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:01.473 21:24:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.473 21:24:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.473 21:24:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.473 21:24:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:01.473 00:17:01.473 real 0m33.931s 00:17:01.473 user 1m3.826s 00:17:01.473 sys 0m9.355s 00:17:01.473 21:24:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:01.473 21:24:25 -- common/autotest_common.sh@10 -- # set +x 00:17:01.473 ************************************ 00:17:01.473 END TEST nvmf_digest 00:17:01.473 ************************************ 00:17:01.731 21:24:25 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:17:01.731 21:24:25 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:17:01.731 21:24:25 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:01.731 21:24:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:01.731 21:24:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:01.731 21:24:25 -- common/autotest_common.sh@10 -- # set +x 00:17:01.731 ************************************ 00:17:01.731 START TEST nvmf_multipath 00:17:01.731 ************************************ 00:17:01.731 21:24:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:01.731 * Looking for test storage... 
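nvmftestfini then unwinds the fixture: it syncs, unloads the kernel NVMe/TCP initiator modules (the rmmod lines for nvme_tcp, nvme_fabrics, and nvme_keyring are the verbose output of modprobe -r), flushes the initiator address, and removes the target's network namespace; the "No such process" for pid 83446 is expected, since digest.sh had already killed that nvmf_tgt a moment earlier. Roughly, and assuming _remove_spdk_ns boils down to deleting the namespace, the teardown is:

    sync
    modprobe -v -r nvme-tcp            # also pulls out nvme-fabrics and nvme-keyring
    ip -4 addr flush nvmf_init_if      # drop 10.0.0.1/24 from the initiator veth
    ip netns delete nvmf_tgt_ns_spdk   # assumed: what _remove_spdk_ns does under the hood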
00:17:01.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:01.732 21:24:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:01.732 21:24:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:01.732 21:24:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:01.732 21:24:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:01.732 21:24:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:01.732 21:24:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:01.732 21:24:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:01.732 21:24:25 -- scripts/common.sh@335 -- # IFS=.-: 00:17:01.732 21:24:25 -- scripts/common.sh@335 -- # read -ra ver1 00:17:01.732 21:24:25 -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.732 21:24:25 -- scripts/common.sh@336 -- # read -ra ver2 00:17:01.732 21:24:25 -- scripts/common.sh@337 -- # local 'op=<' 00:17:01.732 21:24:25 -- scripts/common.sh@339 -- # ver1_l=2 00:17:01.732 21:24:25 -- scripts/common.sh@340 -- # ver2_l=1 00:17:01.732 21:24:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:01.732 21:24:25 -- scripts/common.sh@343 -- # case "$op" in 00:17:01.732 21:24:25 -- scripts/common.sh@344 -- # : 1 00:17:01.732 21:24:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:01.732 21:24:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:01.732 21:24:25 -- scripts/common.sh@364 -- # decimal 1 00:17:01.732 21:24:25 -- scripts/common.sh@352 -- # local d=1 00:17:01.732 21:24:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.732 21:24:25 -- scripts/common.sh@354 -- # echo 1 00:17:01.732 21:24:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:01.732 21:24:25 -- scripts/common.sh@365 -- # decimal 2 00:17:01.732 21:24:25 -- scripts/common.sh@352 -- # local d=2 00:17:01.732 21:24:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.732 21:24:25 -- scripts/common.sh@354 -- # echo 2 00:17:01.732 21:24:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:01.732 21:24:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:01.732 21:24:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:01.732 21:24:25 -- scripts/common.sh@367 -- # return 0 00:17:01.732 21:24:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.732 21:24:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:01.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.732 --rc genhtml_branch_coverage=1 00:17:01.732 --rc genhtml_function_coverage=1 00:17:01.732 --rc genhtml_legend=1 00:17:01.732 --rc geninfo_all_blocks=1 00:17:01.732 --rc geninfo_unexecuted_blocks=1 00:17:01.732 00:17:01.732 ' 00:17:01.732 21:24:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:01.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.732 --rc genhtml_branch_coverage=1 00:17:01.732 --rc genhtml_function_coverage=1 00:17:01.732 --rc genhtml_legend=1 00:17:01.732 --rc geninfo_all_blocks=1 00:17:01.732 --rc geninfo_unexecuted_blocks=1 00:17:01.732 00:17:01.732 ' 00:17:01.732 21:24:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:01.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.732 --rc genhtml_branch_coverage=1 00:17:01.732 --rc genhtml_function_coverage=1 00:17:01.732 --rc genhtml_legend=1 00:17:01.732 --rc geninfo_all_blocks=1 00:17:01.732 --rc geninfo_unexecuted_blocks=1 00:17:01.732 00:17:01.732 ' 00:17:01.732 
21:24:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:01.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.732 --rc genhtml_branch_coverage=1 00:17:01.732 --rc genhtml_function_coverage=1 00:17:01.732 --rc genhtml_legend=1 00:17:01.732 --rc geninfo_all_blocks=1 00:17:01.732 --rc geninfo_unexecuted_blocks=1 00:17:01.732 00:17:01.732 ' 00:17:01.732 21:24:25 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:01.732 21:24:25 -- nvmf/common.sh@7 -- # uname -s 00:17:01.732 21:24:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.732 21:24:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.732 21:24:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.732 21:24:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.732 21:24:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.732 21:24:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.732 21:24:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.732 21:24:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.732 21:24:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.732 21:24:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.732 21:24:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:17:01.732 21:24:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:17:01.732 21:24:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.732 21:24:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.732 21:24:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:01.732 21:24:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:01.732 21:24:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.732 21:24:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.732 21:24:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.732 21:24:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.732 21:24:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.732 21:24:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.732 21:24:25 -- paths/export.sh@5 -- # export PATH 00:17:01.732 21:24:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.732 21:24:25 -- nvmf/common.sh@46 -- # : 0 00:17:01.732 21:24:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:01.732 21:24:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:01.732 21:24:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:01.732 21:24:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.732 21:24:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.732 21:24:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:01.732 21:24:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:01.732 21:24:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:01.732 21:24:25 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:01.732 21:24:25 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:01.732 21:24:25 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.732 21:24:25 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:01.732 21:24:25 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:01.732 21:24:25 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:01.732 21:24:25 -- host/multipath.sh@30 -- # nvmftestinit 00:17:01.732 21:24:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:01.732 21:24:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.732 21:24:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:01.732 21:24:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:01.732 21:24:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:01.732 21:24:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.732 21:24:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.732 21:24:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.732 21:24:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:01.732 21:24:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:01.732 21:24:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:01.732 21:24:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:01.732 21:24:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:01.732 21:24:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:01.732 21:24:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.732 21:24:25 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.732 21:24:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:01.732 21:24:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:01.732 21:24:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:01.732 21:24:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:01.732 21:24:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:01.732 21:24:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.732 21:24:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:01.732 21:24:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:01.732 21:24:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:01.732 21:24:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:01.732 21:24:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:01.990 21:24:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:01.990 Cannot find device "nvmf_tgt_br" 00:17:01.990 21:24:25 -- nvmf/common.sh@154 -- # true 00:17:01.990 21:24:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.990 Cannot find device "nvmf_tgt_br2" 00:17:01.990 21:24:25 -- nvmf/common.sh@155 -- # true 00:17:01.990 21:24:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:01.990 21:24:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:01.990 Cannot find device "nvmf_tgt_br" 00:17:01.990 21:24:25 -- nvmf/common.sh@157 -- # true 00:17:01.990 21:24:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:01.990 Cannot find device "nvmf_tgt_br2" 00:17:01.990 21:24:25 -- nvmf/common.sh@158 -- # true 00:17:01.990 21:24:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:01.990 21:24:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:01.990 21:24:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.990 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.990 21:24:25 -- nvmf/common.sh@161 -- # true 00:17:01.990 21:24:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.990 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.990 21:24:25 -- nvmf/common.sh@162 -- # true 00:17:01.990 21:24:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:01.990 21:24:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:01.990 21:24:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:01.990 21:24:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:01.990 21:24:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:01.990 21:24:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:01.990 21:24:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:01.990 21:24:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:01.990 21:24:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:01.990 21:24:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:01.990 21:24:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:01.990 21:24:25 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:17:01.990 21:24:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:01.990 21:24:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:01.990 21:24:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:01.990 21:24:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:01.990 21:24:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:01.990 21:24:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:01.990 21:24:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:01.990 21:24:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:01.990 21:24:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:02.249 21:24:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:02.249 21:24:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:02.249 21:24:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:02.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:17:02.249 00:17:02.249 --- 10.0.0.2 ping statistics --- 00:17:02.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.249 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:02.249 21:24:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:02.249 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:02.249 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:02.249 00:17:02.249 --- 10.0.0.3 ping statistics --- 00:17:02.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.249 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:02.249 21:24:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:02.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:02.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:02.249 00:17:02.249 --- 10.0.0.1 ping statistics --- 00:17:02.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.249 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:02.249 21:24:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.249 21:24:25 -- nvmf/common.sh@421 -- # return 0 00:17:02.249 21:24:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:02.249 21:24:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.249 21:24:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:02.249 21:24:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:02.249 21:24:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.249 21:24:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:02.249 21:24:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:02.249 21:24:25 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:02.249 21:24:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:02.249 21:24:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:02.249 21:24:25 -- common/autotest_common.sh@10 -- # set +x 00:17:02.249 21:24:25 -- nvmf/common.sh@469 -- # nvmfpid=83912 00:17:02.249 21:24:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:02.249 21:24:25 -- nvmf/common.sh@470 -- # waitforlisten 83912 00:17:02.249 21:24:25 -- common/autotest_common.sh@829 -- # '[' -z 83912 ']' 00:17:02.249 21:24:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.249 21:24:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.249 21:24:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.249 21:24:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.249 21:24:25 -- common/autotest_common.sh@10 -- # set +x 00:17:02.249 [2024-11-28 21:24:25.835221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:02.249 [2024-11-28 21:24:25.835313] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.249 [2024-11-28 21:24:25.968279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:02.506 [2024-11-28 21:24:26.003169] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:02.506 [2024-11-28 21:24:26.003309] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.507 [2024-11-28 21:24:26.003324] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.507 [2024-11-28 21:24:26.003333] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
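Everything from "ip netns add" down to the three pings is the virtual topology the TCP tests run on: the target side lives in the nvmf_tgt_ns_spdk namespace behind three veth pairs and a bridge, owning 10.0.0.2 and 10.0.0.3, while the initiator keeps 10.0.0.1 on the host; only once the pings succeed is nvmf_tgt started inside the namespace on cores 0-1 (-m 0x3). Condensed from the traced commands, the wiring is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair, stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair, one end moves into the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # nvmf_tgt_if2 / nvmf_tgt_br2 are created the same way and get 10.0.0.3/24
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br     # the bridge joins the host ends of all pairs
    ip link set nvmf_tgt_br master nvmf_br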
00:17:02.507 [2024-11-28 21:24:26.003998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.507 [2024-11-28 21:24:26.004059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.072 21:24:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.072 21:24:26 -- common/autotest_common.sh@862 -- # return 0 00:17:03.072 21:24:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:03.072 21:24:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:03.072 21:24:26 -- common/autotest_common.sh@10 -- # set +x 00:17:03.330 21:24:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.330 21:24:26 -- host/multipath.sh@33 -- # nvmfapp_pid=83912 00:17:03.330 21:24:26 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:03.330 [2024-11-28 21:24:27.048325] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.330 21:24:27 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:03.898 Malloc0 00:17:03.898 21:24:27 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:03.898 21:24:27 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:04.156 21:24:27 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.415 [2024-11-28 21:24:28.050106] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.415 21:24:28 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:04.691 [2024-11-28 21:24:28.270159] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:04.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:04.691 21:24:28 -- host/multipath.sh@44 -- # bdevperf_pid=83970 00:17:04.691 21:24:28 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:04.691 21:24:28 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:04.691 21:24:28 -- host/multipath.sh@47 -- # waitforlisten 83970 /var/tmp/bdevperf.sock 00:17:04.691 21:24:28 -- common/autotest_common.sh@829 -- # '[' -z 83970 ']' 00:17:04.691 21:24:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:04.691 21:24:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.691 21:24:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
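With the target up, multipath.sh builds a single subsystem that is reachable over two TCP listeners on the same address, which is what gives the initiator two paths to one namespace; the -r flag enables ANA reporting, which the test exercises later by flipping each listener's ANA state. It then launches a separate bdevperf process on core 2 (-m 0x4) and drives it through /var/tmp/bdevperf.sock. The configuration traced above amounts to:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

In the traced commands that follow, bdevperf is told over its own socket to attach Nvme0 to the subsystem through both ports, the second time with -x multipath, so a single Nvme0n1 bdev ends up with two paths.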
00:17:04.691 21:24:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.691 21:24:28 -- common/autotest_common.sh@10 -- # set +x 00:17:05.642 21:24:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.642 21:24:29 -- common/autotest_common.sh@862 -- # return 0 00:17:05.642 21:24:29 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:05.911 21:24:29 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:06.170 Nvme0n1 00:17:06.170 21:24:29 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:06.736 Nvme0n1 00:17:06.736 21:24:30 -- host/multipath.sh@78 -- # sleep 1 00:17:06.736 21:24:30 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:07.671 21:24:31 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:07.671 21:24:31 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:07.930 21:24:31 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:08.189 21:24:31 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:08.189 21:24:31 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83912 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:08.189 21:24:31 -- host/multipath.sh@65 -- # dtrace_pid=84015 00:17:08.189 21:24:31 -- host/multipath.sh@66 -- # sleep 6 00:17:14.756 21:24:37 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:14.756 21:24:37 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:14.756 21:24:38 -- host/multipath.sh@67 -- # active_port=4421 00:17:14.756 21:24:38 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:14.756 Attaching 4 probes... 
00:17:14.756 @path[10.0.0.2, 4421]: 19175 00:17:14.756 @path[10.0.0.2, 4421]: 19914 00:17:14.756 @path[10.0.0.2, 4421]: 20091 00:17:14.756 @path[10.0.0.2, 4421]: 19654 00:17:14.756 @path[10.0.0.2, 4421]: 19710 00:17:14.756 21:24:38 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:14.756 21:24:38 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:14.756 21:24:38 -- host/multipath.sh@69 -- # sed -n 1p 00:17:14.756 21:24:38 -- host/multipath.sh@69 -- # port=4421 00:17:14.756 21:24:38 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:14.756 21:24:38 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:14.756 21:24:38 -- host/multipath.sh@72 -- # kill 84015 00:17:14.756 21:24:38 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:14.756 21:24:38 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:14.756 21:24:38 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:14.756 21:24:38 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:15.015 21:24:38 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:15.015 21:24:38 -- host/multipath.sh@65 -- # dtrace_pid=84136 00:17:15.015 21:24:38 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83912 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:15.015 21:24:38 -- host/multipath.sh@66 -- # sleep 6 00:17:21.582 21:24:44 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:21.582 21:24:44 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:21.582 21:24:44 -- host/multipath.sh@67 -- # active_port=4420 00:17:21.582 21:24:44 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:21.582 Attaching 4 probes... 
00:17:21.582 @path[10.0.0.2, 4420]: 19386 00:17:21.582 @path[10.0.0.2, 4420]: 19872 00:17:21.582 @path[10.0.0.2, 4420]: 19928 00:17:21.582 @path[10.0.0.2, 4420]: 19950 00:17:21.582 @path[10.0.0.2, 4420]: 20081 00:17:21.582 21:24:44 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:21.582 21:24:44 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:21.582 21:24:44 -- host/multipath.sh@69 -- # sed -n 1p 00:17:21.582 21:24:44 -- host/multipath.sh@69 -- # port=4420 00:17:21.582 21:24:44 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:21.582 21:24:44 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:21.582 21:24:44 -- host/multipath.sh@72 -- # kill 84136 00:17:21.582 21:24:44 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:21.582 21:24:44 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:21.582 21:24:44 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:21.582 21:24:45 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:21.851 21:24:45 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:21.851 21:24:45 -- host/multipath.sh@65 -- # dtrace_pid=84254 00:17:21.851 21:24:45 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83912 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:21.851 21:24:45 -- host/multipath.sh@66 -- # sleep 6 00:17:28.419 21:24:51 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:28.419 21:24:51 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:28.419 21:24:51 -- host/multipath.sh@67 -- # active_port=4421 00:17:28.419 21:24:51 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.419 Attaching 4 probes... 
00:17:28.419 @path[10.0.0.2, 4421]: 15924 00:17:28.419 @path[10.0.0.2, 4421]: 19498 00:17:28.419 @path[10.0.0.2, 4421]: 19812 00:17:28.419 @path[10.0.0.2, 4421]: 19693 00:17:28.419 @path[10.0.0.2, 4421]: 19589 00:17:28.419 21:24:51 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:28.419 21:24:51 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:28.419 21:24:51 -- host/multipath.sh@69 -- # sed -n 1p 00:17:28.419 21:24:51 -- host/multipath.sh@69 -- # port=4421 00:17:28.419 21:24:51 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:28.419 21:24:51 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:28.419 21:24:51 -- host/multipath.sh@72 -- # kill 84254 00:17:28.419 21:24:51 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.419 21:24:51 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:28.419 21:24:51 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:28.419 21:24:52 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:28.678 21:24:52 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:28.678 21:24:52 -- host/multipath.sh@65 -- # dtrace_pid=84366 00:17:28.678 21:24:52 -- host/multipath.sh@66 -- # sleep 6 00:17:28.678 21:24:52 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83912 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:35.265 21:24:58 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:35.265 21:24:58 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:35.265 21:24:58 -- host/multipath.sh@67 -- # active_port= 00:17:35.265 21:24:58 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:35.265 Attaching 4 probes... 
00:17:35.265 00:17:35.265 00:17:35.265 00:17:35.265 00:17:35.265 00:17:35.265 21:24:58 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:35.265 21:24:58 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:35.265 21:24:58 -- host/multipath.sh@69 -- # sed -n 1p 00:17:35.265 21:24:58 -- host/multipath.sh@69 -- # port= 00:17:35.265 21:24:58 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:35.265 21:24:58 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:35.265 21:24:58 -- host/multipath.sh@72 -- # kill 84366 00:17:35.265 21:24:58 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:35.265 21:24:58 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:35.265 21:24:58 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:35.265 21:24:58 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:35.523 21:24:59 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:35.523 21:24:59 -- host/multipath.sh@65 -- # dtrace_pid=84484 00:17:35.523 21:24:59 -- host/multipath.sh@66 -- # sleep 6 00:17:35.523 21:24:59 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83912 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:42.097 21:25:05 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:42.097 21:25:05 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:42.097 21:25:05 -- host/multipath.sh@67 -- # active_port=4421 00:17:42.097 21:25:05 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:42.097 Attaching 4 probes... 
00:17:42.097 @path[10.0.0.2, 4421]: 19323 00:17:42.097 @path[10.0.0.2, 4421]: 19255 00:17:42.097 @path[10.0.0.2, 4421]: 19319 00:17:42.097 @path[10.0.0.2, 4421]: 19331 00:17:42.097 @path[10.0.0.2, 4421]: 19645 00:17:42.097 21:25:05 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:42.097 21:25:05 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:42.097 21:25:05 -- host/multipath.sh@69 -- # sed -n 1p 00:17:42.097 21:25:05 -- host/multipath.sh@69 -- # port=4421 00:17:42.097 21:25:05 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:42.097 21:25:05 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:42.097 21:25:05 -- host/multipath.sh@72 -- # kill 84484 00:17:42.097 21:25:05 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:42.097 21:25:05 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:17:42.097 [2024-11-28 21:25:05.622456] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b3780 is same with the state(5) to be set
00:17:42.098 [... the same tcp.c:1576 *ERROR* entry for tqpair=0x14b3780 is logged 44 more times between 21:25:05.622509 and 21:25:05.622903 while the 4421 listener is torn down; the duplicate lines are elided ...]
00:17:42.098 21:25:05 -- host/multipath.sh@101 -- # sleep 1 00:17:43.034 21:25:06 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:43.034 21:25:06 -- host/multipath.sh@65 -- # dtrace_pid=84602 00:17:43.034 21:25:06 -- host/multipath.sh@66 -- # sleep 6 00:17:43.034 21:25:06 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83912 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:49.603 21:25:12 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:49.603 21:25:12 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:49.603 21:25:12 -- host/multipath.sh@67 -- # active_port=4420 00:17:49.603 21:25:12 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:49.603 Attaching 4 probes... 
00:17:49.603 @path[10.0.0.2, 4420]: 18872 00:17:49.603 @path[10.0.0.2, 4420]: 19495 00:17:49.603 @path[10.0.0.2, 4420]: 19303 00:17:49.603 @path[10.0.0.2, 4420]: 19472 00:17:49.603 @path[10.0.0.2, 4420]: 19307 00:17:49.603 21:25:12 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:49.603 21:25:12 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:49.603 21:25:12 -- host/multipath.sh@69 -- # sed -n 1p 00:17:49.603 21:25:12 -- host/multipath.sh@69 -- # port=4420 00:17:49.603 21:25:12 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:49.603 21:25:12 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:49.603 21:25:12 -- host/multipath.sh@72 -- # kill 84602 00:17:49.603 21:25:12 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:49.603 21:25:12 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:49.603 [2024-11-28 21:25:13.206169] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:49.603 21:25:13 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:49.861 21:25:13 -- host/multipath.sh@111 -- # sleep 6 00:17:56.424 21:25:19 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:56.424 21:25:19 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83912 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:56.424 21:25:19 -- host/multipath.sh@65 -- # dtrace_pid=84782 00:17:56.424 21:25:19 -- host/multipath.sh@66 -- # sleep 6 00:18:03.005 21:25:25 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:03.005 21:25:25 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:03.005 21:25:25 -- host/multipath.sh@67 -- # active_port=4421 00:18:03.005 21:25:25 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:03.005 Attaching 4 probes... 
00:18:03.005 @path[10.0.0.2, 4421]: 18894 00:18:03.005 @path[10.0.0.2, 4421]: 19297 00:18:03.005 @path[10.0.0.2, 4421]: 19642 00:18:03.005 @path[10.0.0.2, 4421]: 19808 00:18:03.005 @path[10.0.0.2, 4421]: 19144 00:18:03.005 21:25:25 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:03.005 21:25:25 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:03.005 21:25:25 -- host/multipath.sh@69 -- # sed -n 1p 00:18:03.005 21:25:25 -- host/multipath.sh@69 -- # port=4421 00:18:03.005 21:25:25 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:03.005 21:25:25 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:03.005 21:25:25 -- host/multipath.sh@72 -- # kill 84782 00:18:03.005 21:25:25 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:03.005 21:25:25 -- host/multipath.sh@114 -- # killprocess 83970 00:18:03.005 21:25:25 -- common/autotest_common.sh@936 -- # '[' -z 83970 ']' 00:18:03.005 21:25:25 -- common/autotest_common.sh@940 -- # kill -0 83970 00:18:03.005 21:25:25 -- common/autotest_common.sh@941 -- # uname 00:18:03.005 21:25:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:03.005 21:25:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83970 00:18:03.005 killing process with pid 83970 00:18:03.005 21:25:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:03.005 21:25:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:03.005 21:25:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83970' 00:18:03.005 21:25:25 -- common/autotest_common.sh@955 -- # kill 83970 00:18:03.005 21:25:25 -- common/autotest_common.sh@960 -- # wait 83970 00:18:03.005 Connection closed with partial response: 00:18:03.005 00:18:03.005 00:18:03.005 21:25:25 -- host/multipath.sh@116 -- # wait 83970 00:18:03.005 21:25:25 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:03.005 [2024-11-28 21:24:28.329606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:03.005 [2024-11-28 21:24:28.329686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83970 ] 00:18:03.005 [2024-11-28 21:24:28.462783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.005 [2024-11-28 21:24:28.501813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.005 Running I/O for 90 seconds... 
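For reference, every confirm_io_on_port check in the xtrace above (multipath.sh lines 64-73) runs the same cycle: attach the nvmf_path.bt bpftrace probes to the running bdevperf process, let I/O flow for a few seconds, ask the target which listener currently holds the requested ANA state, and compare that against the port the per-path counters in trace.txt actually saw. A minimal sketch of that flow, reconstructed from the traced commands; the variable names, redirections and file locations here are assumptions, not the verbatim test/nvmf/host/multipath.sh source:

  confirm_io_on_port() {
      local ana_state=$1 expected_port=$2
      # Attach scripts/bpf/nvmf_path.bt to the running bdevperf (pid 83912 in this log)
      # and let I/O accumulate per-path counters in trace.txt for a few seconds.
      "$rootdir/scripts/bpftrace.sh" "$bdevperf_pid" "$rootdir/scripts/bpf/nvmf_path.bt" > trace.txt &
      dtrace_pid=$!
      sleep 6
      # Port the target reports for the requested ANA state on cnode1.
      active_port=$("$rootdir/scripts/rpc.py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
          | jq -r ".[] | select(.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")
      # Port that actually carried I/O according to the @path[10.0.0.2, <port>] counters.
      port=$(cut -d ']' -f1 < trace.txt | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)
      # Both must name the expected port for the step to pass.
      [[ $port == "$expected_port" ]]
      [[ $port == "$active_port" ]]
      kill "$dtrace_pid"
      rm -f trace.txt
  }

The port=4421 / active_port=4421 (and 4420) assignments recorded above are these two lookups agreeing after each nvmf_subsystem_listener_set_ana_state call.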
00:18:03.005 [2024-11-28 21:24:38.518069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.005 [2024-11-28 21:24:38.518158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:03.005 [2024-11-28 21:24:38.518213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.005 [2024-11-28 21:24:38.518234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:03.005 [2024-11-28 21:24:38.518256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.005 [2024-11-28 21:24:38.518271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:03.005 [2024-11-28 21:24:38.518291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.005 [2024-11-28 21:24:38.518306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:03.005 [2024-11-28 21:24:38.518326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.005 [2024-11-28 21:24:38.518341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:03.005 [2024-11-28 21:24:38.518360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.005 [2024-11-28 21:24:38.518376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:03.005 [2024-11-28 21:24:38.518395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.005 [2024-11-28 21:24:38.518410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:03.005 [2024-11-28 21:24:38.518430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.005 [2024-11-28 21:24:38.518444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:03.005 [2024-11-28 21:24:38.518463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.005 [2024-11-28 21:24:38.518478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:03.005 [2024-11-28 21:24:38.518498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.005 [2024-11-28 21:24:38.518512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:03.005 [2024-11-28 21:24:38.518532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.005 [2024-11-28 21:24:38.518565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.518589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.518604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.518624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.518638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.518658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.518673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.518692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.006 [2024-11-28 21:24:38.518707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.518727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.518742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.518762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.006 [2024-11-28 21:24:38.518777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.518797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.006 [2024-11-28 21:24:38.518811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.518831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.006 [2024-11-28 21:24:38.518846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.518865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.518880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.518899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.518915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.520755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.006 [2024-11-28 21:24:38.520790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.520817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.520834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.520869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.520887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.520907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.520922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.520943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.006 [2024-11-28 21:24:38.520958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.520978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.006 [2024-11-28 21:24:38.520993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.006 [2024-11-28 21:24:38.521043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.006 [2024-11-28 21:24:38.521079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:03.006 [2024-11-28 21:24:38.521114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.521149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.521185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.521220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.521255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.521291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.521336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.521372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.521407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.521443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:17 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.521478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.521513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.006 [2024-11-28 21:24:38.521548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.521584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.006 [2024-11-28 21:24:38.521619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.006 [2024-11-28 21:24:38.521655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.521690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.521710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.006 [2024-11-28 21:24:38.521726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.523666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.006 [2024-11-28 21:24:38.523710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.523740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.006 [2024-11-28 21:24:38.523758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:03.006 [2024-11-28 21:24:38.523780] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.523796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.523817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.007 [2024-11-28 21:24:38.523833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.523854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.523870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.523890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.523906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.523941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.007 [2024-11-28 21:24:38.523957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.523977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.007 [2024-11-28 21:24:38.523992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.007 [2024-11-28 21:24:38.524167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:18:03.007 [2024-11-28 21:24:38.524188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.007 [2024-11-28 21:24:38.524551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.007 [2024-11-28 21:24:38.524701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.524756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.524772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.527115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.007 [2024-11-28 21:24:38.527175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.527211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.007 [2024-11-28 21:24:38.527230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.527253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.527271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.527294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.007 [2024-11-28 21:24:38.527311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.527334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.007 [2024-11-28 21:24:38.527351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.527374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.527391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.527414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.527431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.527454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.527482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.527518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.527560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.527595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.527613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.527634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.527650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.527671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.527688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.527709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.007 [2024-11-28 21:24:38.527725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.007 [2024-11-28 21:24:38.527746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:03.008 [2024-11-28 21:24:38.527762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.527783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.527799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.527820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.527837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.527857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.527873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.527895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.008 [2024-11-28 21:24:38.527911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.527947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.008 [2024-11-28 21:24:38.527963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.527983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.008 [2024-11-28 21:24:38.527999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.528035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.528051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.528072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.528110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.528135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.008 [2024-11-28 21:24:38.528152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.528173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 
nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.528189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.529434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.008 [2024-11-28 21:24:38.529468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.529496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.529513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.529535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.529550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.529572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.008 [2024-11-28 21:24:38.529587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.529608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.008 [2024-11-28 21:24:38.529624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.529645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.008 [2024-11-28 21:24:38.529660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.529681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.529697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.529718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.008 [2024-11-28 21:24:38.529733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.529754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.529770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.529791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.529818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.529840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.529857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.529878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.529894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.529914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.529930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.529951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.529967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.529987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.530015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.530057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.530073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.530095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.530111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.530132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.008 [2024-11-28 21:24:38.530148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.530170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.530185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:18:03.008 [2024-11-28 21:24:38.530207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:88480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.530223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.530244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.530260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.530281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.530298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.530328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.530361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.530382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.530398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.530418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.530434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.530455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.530470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.530491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.530506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.530527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.008 [2024-11-28 21:24:38.530542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:03.008 [2024-11-28 21:24:38.530563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.009 [2024-11-28 21:24:38.530578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.009 [2024-11-28 21:24:45.133166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.009 [2024-11-28 21:24:45.133235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.009 [2024-11-28 21:24:45.133274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.133310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.133346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.133405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.009 [2024-11-28 21:24:45.133441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.133477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.133513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.009 [2024-11-28 21:24:45.133550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.133603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.009 [2024-11-28 21:24:45.133638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.009 [2024-11-28 21:24:45.133674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.009 [2024-11-28 21:24:45.133709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.133745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.009 [2024-11-28 21:24:45.133957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.133979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.133994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.134053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.134110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 
[2024-11-28 21:24:45.134148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.134184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.134221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.134257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.134293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.009 [2024-11-28 21:24:45.134329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.009 [2024-11-28 21:24:45.134366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.134403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.134440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.009 [2024-11-28 21:24:45.134490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2384 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.009 [2024-11-28 21:24:45.134533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.134570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.009 [2024-11-28 21:24:45.134606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:03.009 [2024-11-28 21:24:45.134626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.134642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.134662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.134677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.134698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.134713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.134733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.134748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.134768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.134784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.134803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.010 [2024-11-28 21:24:45.134818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.134838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.010 [2024-11-28 21:24:45.134853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.134874] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.134889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.134909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.010 [2024-11-28 21:24:45.134923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.134944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.134959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.134991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.010 [2024-11-28 21:24:45.135081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135327] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.010 [2024-11-28 21:24:45.135479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.010 [2024-11-28 21:24:45.135560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.010 [2024-11-28 21:24:45.135643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 
dnr:0 00:18:03.010 [2024-11-28 21:24:45.135770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.010 [2024-11-28 21:24:45.135821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.010 [2024-11-28 21:24:45.135857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.010 [2024-11-28 21:24:45.135928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.010 [2024-11-28 21:24:45.135963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.135983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.135999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.136020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.010 [2024-11-28 21:24:45.136042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.136064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.010 [2024-11-28 21:24:45.136093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.136116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.010 [2024-11-28 21:24:45.136131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.136152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.010 [2024-11-28 21:24:45.136167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.010 [2024-11-28 21:24:45.136187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.010 [2024-11-28 21:24:45.136204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.136240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.136275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.136311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.136346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.136381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.136416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.136452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.136511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.011 [2024-11-28 21:24:45.136550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.136586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.136623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.136659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.011 [2024-11-28 21:24:45.136695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.011 [2024-11-28 21:24:45.136731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.011 [2024-11-28 21:24:45.136767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.136804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.136841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:03.011 [2024-11-28 21:24:45.136892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.136927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.011 [2024-11-28 21:24:45.136963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.136992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.011 [2024-11-28 21:24:45.137008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.137039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.137057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.137078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.137093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.137113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.137128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.137148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.137163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.137183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.137198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.137234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.137249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.138506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.138534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.138561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.138586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.138609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.138624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.138645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.138661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.138682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.138698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.138732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.011 [2024-11-28 21:24:45.138749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.138770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.011 [2024-11-28 21:24:45.138785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.138806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.011 [2024-11-28 21:24:45.138821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.138842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.011 [2024-11-28 21:24:45.138858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.138879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.011 [2024-11-28 21:24:45.138894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.138915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.011 [2024-11-28 21:24:45.138930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:03.011 [2024-11-28 21:24:45.138951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.011 [2024-11-28 21:24:45.138966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.138988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.139016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.139277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.139324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.139365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.139404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.139459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.139531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.139597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:18:03.012 [2024-11-28 21:24:45.139617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.139633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.139669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.139704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.139739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.139775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.139810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.139846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.139881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.139916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.139958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.139985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.140001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.140041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.140093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.140134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.140171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.140206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.140243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.140278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.140314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.140349] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.140384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.140419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.140465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.140518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.140554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.012 [2024-11-28 21:24:45.140591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.140627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.140664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.140700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.140737] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:03.012 [2024-11-28 21:24:45.140758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.012 [2024-11-28 21:24:45.140774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.140795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.140810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.140830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.140846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.140882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.140897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.140939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.013 [2024-11-28 21:24:45.140959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.140980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.013 [2024-11-28 21:24:45.140996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.013 [2024-11-28 21:24:45.141119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:03.013 [2024-11-28 21:24:45.141155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.013 [2024-11-28 21:24:45.141469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.013 [2024-11-28 21:24:45.141505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 
lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.013 [2024-11-28 21:24:45.141578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.013 [2024-11-28 21:24:45.141748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.141958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.141998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.142016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.142053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.142070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.142117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.142135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.142172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.013 [2024-11-28 21:24:45.142192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.142214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.013 [2024-11-28 21:24:45.142231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.142252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.142283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.142303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.013 [2024-11-28 21:24:45.142319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.142340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.142355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.142376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.142392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:03.013 [2024-11-28 21:24:45.142412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.142428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:03.013 
[2024-11-28 21:24:45.142449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.013 [2024-11-28 21:24:45.142464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.142485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.014 [2024-11-28 21:24:45.142500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.142531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.014 [2024-11-28 21:24:45.142564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.142585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.142601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.142623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.014 [2024-11-28 21:24:45.142639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.142660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.014 [2024-11-28 21:24:45.142676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.142697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.142713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.142735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.014 [2024-11-28 21:24:45.142751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.142772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.014 [2024-11-28 21:24:45.142788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.142809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.014 [2024-11-28 21:24:45.142825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.142846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.014 [2024-11-28 21:24:45.142862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.142884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.142915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.142936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.142952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.142972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.142988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.143034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.143085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.143122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.143186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.143227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.143268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.014 [2024-11-28 21:24:45.143309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.143350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.143395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.143436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.014 [2024-11-28 21:24:45.143505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.014 [2024-11-28 21:24:45.143573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.014 [2024-11-28 21:24:45.143617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.143656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.143692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.014 [2024-11-28 21:24:45.143728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.143764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.014 [2024-11-28 21:24:45.143805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.014 [2024-11-28 21:24:45.143842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.143881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.014 [2024-11-28 21:24:45.143919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:03.014 [2024-11-28 21:24:45.143939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.143955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.143976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.143991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.144012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.144027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.144048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.144063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.144124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:03.015 [2024-11-28 21:24:45.144142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.144163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.144179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.144201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.144217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.144238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.144253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.144275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.144291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.144312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.144328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.144349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.144365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.144386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.144402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.144423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.144439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.144460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.144492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.144529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2824 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.144547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.144569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.144585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.145964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.145996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.146055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.146091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.146128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.146164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.146199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.146235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.146271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.146307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.146343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.146379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.146414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.146460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.146499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.146536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.146572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.146610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.146667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:18:03.015 [2024-11-28 21:24:45.146689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.146704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.146740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.015 [2024-11-28 21:24:45.146776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.015 [2024-11-28 21:24:45.146812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:03.015 [2024-11-28 21:24:45.146832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.146847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.146868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.146883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.146904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.016 [2024-11-28 21:24:45.146928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.146949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.146966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.146986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.147013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.016 [2024-11-28 21:24:45.147053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.147089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.016 [2024-11-28 21:24:45.147125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.016 [2024-11-28 21:24:45.147192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.016 [2024-11-28 21:24:45.147591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.147635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.016 [2024-11-28 21:24:45.147673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.147710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.147746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.147783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.147831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.147867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.147903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.147939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.147975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.147995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.016 [2024-11-28 21:24:45.148011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.148031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.016 [2024-11-28 21:24:45.148059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.148084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.148099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.148120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.148135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.148155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.016 [2024-11-28 21:24:45.148171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.148191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.016 [2024-11-28 21:24:45.148206] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.148227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.148244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.148274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.148290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.148311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.148326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.156115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.156147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.156172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.156189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.156210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.156226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.156247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.156263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.156284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.016 [2024-11-28 21:24:45.156300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.156321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.016 [2024-11-28 21:24:45.156337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.156358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:03.016 [2024-11-28 21:24:45.156374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.156402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.016 [2024-11-28 21:24:45.156435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.156457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.156490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.156513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.016 [2024-11-28 21:24:45.156529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:03.016 [2024-11-28 21:24:45.156551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.017 [2024-11-28 21:24:45.156582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.156606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.156623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.156645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.156662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.156683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.156700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.156722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.156739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.156760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.156777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.156798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.156815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.156851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.156883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.156903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.156919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.156956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.017 [2024-11-28 21:24:45.156972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.156993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.017 [2024-11-28 21:24:45.157009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.017 [2024-11-28 21:24:45.157109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.017 [2024-11-28 21:24:45.157301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.017 [2024-11-28 21:24:45.157339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.017 [2024-11-28 21:24:45.157413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.017 [2024-11-28 21:24:45.157450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.017 [2024-11-28 21:24:45.157526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.017 [2024-11-28 21:24:45.157563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.017 [2024-11-28 21:24:45.157610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.017 
[2024-11-28 21:24:45.157632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.017 [2024-11-28 21:24:45.157648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.157965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.157982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.158030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.017 [2024-11-28 21:24:45.158050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.158081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.158100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.158123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.017 [2024-11-28 21:24:45.158139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:03.017 [2024-11-28 21:24:45.158161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.158177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.158199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.018 [2024-11-28 21:24:45.158216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.158238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.018 [2024-11-28 21:24:45.158254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.158277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.018 [2024-11-28 21:24:45.158293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.158315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.158331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.158354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.158370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.158392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.018 [2024-11-28 21:24:45.158408] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.158430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.158446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.158484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.018 [2024-11-28 21:24:45.158711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.158746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.018 [2024-11-28 21:24:45.158764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.158788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.158816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.158841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.158859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.158897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.158913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.158936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.158952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.158974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.158991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.159060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.159115] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.159184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.159224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.159264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.159304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.018 [2024-11-28 21:24:45.159343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.018 [2024-11-28 21:24:45.159392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.018 [2024-11-28 21:24:45.159464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.018 [2024-11-28 21:24:45.159502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.018 [2024-11-28 21:24:45.159553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:03.018 [2024-11-28 21:24:45.159590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.018 [2024-11-28 21:24:45.159627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.018 [2024-11-28 21:24:45.159664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.018 [2024-11-28 21:24:45.159700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.159737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.018 [2024-11-28 21:24:45.159773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.159809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.159846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.159883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:03.018 [2024-11-28 21:24:45.159912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.018 [2024-11-28 21:24:45.159929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.159949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:45.159965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.159986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:45.160002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:45.160054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:45.160105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:45.160144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:45.160181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:45.160218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:45.160256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:45.160293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:45.160331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:45.160368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:45.160428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:45.160466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:45.160503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:45.160539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:45.160576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:45.160612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:45.160649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:45.160685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:45.160721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:18:03.019 [2024-11-28 21:24:45.160742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:45.160758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.160779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:45.160794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:45.161271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:45.161300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.263425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:52.263528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.263616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:115888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:52.263639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.263662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:115896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:52.263679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.263700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:52.263716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.263737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:52.263753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.263774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:52.263805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.263825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:52.263840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.263860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:52.263876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.263896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:52.263911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.263932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:52.263949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.263971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:52.263987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.264007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:52.264022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.264042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:52.264072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.264108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.019 [2024-11-28 21:24:52.264126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.264146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:115928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.019 [2024-11-28 21:24:52.264162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:03.019 [2024-11-28 21:24:52.264182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.264198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.264234] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.020 [2024-11-28 21:24:52.264271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.264307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:115968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.020 [2024-11-28 21:24:52.264344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.020 [2024-11-28 21:24:52.264380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:115984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.020 [2024-11-28 21:24:52.264417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.264453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.020 [2024-11-28 21:24:52.264495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.020 [2024-11-28 21:24:52.264531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:116016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.020 [2024-11-28 21:24:52.264578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115320 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.264614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.264650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.264687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.264723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.264759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.264795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.264831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.264867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:116024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.020 [2024-11-28 21:24:52.264903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:116032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.264940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.264960] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.020 [2024-11-28 21:24:52.264975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.020 [2024-11-28 21:24:52.265033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.265071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.265107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.265143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.265179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.265215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.265251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.265287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:116112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.020 [2024-11-28 21:24:52.265323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 
21:24:52.265344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.020 [2024-11-28 21:24:52.265377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.265414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.020 [2024-11-28 21:24:52.265473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.265522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.265562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:115464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.265600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.265637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.020 [2024-11-28 21:24:52.265675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:03.020 [2024-11-28 21:24:52.265696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.265712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.265733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.265749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.265770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.265786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.265806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.265822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.265843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.265859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.265880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.021 [2024-11-28 21:24:52.265896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.265918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:116168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.021 [2024-11-28 21:24:52.265935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.265956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.265979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.266030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:116192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.021 [2024-11-28 21:24:52.266090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:116200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.021 [2024-11-28 21:24:52.266129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.266169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.266207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:116224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.021 [2024-11-28 21:24:52.266252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:116232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.021 [2024-11-28 21:24:52.266290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.266329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.266367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:116256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.021 [2024-11-28 21:24:52.266420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:116264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.021 [2024-11-28 21:24:52.266456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.021 [2024-11-28 21:24:52.266500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.021 [2024-11-28 21:24:52.266540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:116288 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:03.021 [2024-11-28 21:24:52.266577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.021 [2024-11-28 21:24:52.266614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.266651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.266688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.266725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.266762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.266799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.266836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.266873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.266911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:96 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.021 [2024-11-28 21:24:52.266948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.266976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.266993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.267015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.267040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.267064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.021 [2024-11-28 21:24:52.267080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.267102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.267117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.267148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.021 [2024-11-28 21:24:52.267183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.267206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.267223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.267245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.021 [2024-11-28 21:24:52.267262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:03.021 [2024-11-28 21:24:52.267284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:116368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.267301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.267328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:116376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.022 [2024-11-28 21:24:52.267346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.267368] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.022 [2024-11-28 21:24:52.267385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.267408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:116392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.022 [2024-11-28 21:24:52.267425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.267448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.267469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.267502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.267535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.267557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.267573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.267595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:116424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.022 [2024-11-28 21:24:52.267611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.267633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:116432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.267649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.267671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.267687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.267709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.022 [2024-11-28 21:24:52.267725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.267747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.267763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 
cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.267799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.267815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.267836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.267852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.268815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.268843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.268877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.268895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.268924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.268956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.268986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.269028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.269081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.269128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:116464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.022 [2024-11-28 21:24:52.269178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.269225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.269271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.022 [2024-11-28 21:24:52.269318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:116496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.022 [2024-11-28 21:24:52.269365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.022 [2024-11-28 21:24:52.269425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.022 [2024-11-28 21:24:52.269470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.269514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.022 [2024-11-28 21:24:52.269559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.022 [2024-11-28 21:24:52.269624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.269673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116552 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.269719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:116560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:24:52.269763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.022 [2024-11-28 21:24:52.269829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:24:52.269858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.022 [2024-11-28 21:24:52.269875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:25:05.622965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:25:05.623060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:25:05.623088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:114584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:25:05.623121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:25:05.623160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:25:05.623180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:25:05.623196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.022 [2024-11-28 21:25:05.623210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.022 [2024-11-28 21:25:05.623226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.623240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.623280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.623310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.623558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.623594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.623625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.623655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.623701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.623729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.623758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.623801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.623845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.023 
[2024-11-28 21:25:05.623873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.623901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.023 [2024-11-28 21:25:05.623927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:114712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.623966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.623982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.023 [2024-11-28 21:25:05.623996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:114024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:114048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:114056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624199] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:114176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:114200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.023 [2024-11-28 21:25:05.624569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.023 [2024-11-28 21:25:05.624598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.023 [2024-11-28 21:25:05.624627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.023 [2024-11-28 21:25:05.624643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.624656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.624701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.624729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.624742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.024 [2024-11-28 21:25:05.624772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.624786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.624799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.624814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.024 [2024-11-28 21:25:05.624833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.624848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.024 [2024-11-28 21:25:05.624861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.624876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.024 [2024-11-28 21:25:05.624889] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.624904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.624916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.624931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.624944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.624958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.624971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.624986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.024 [2024-11-28 21:25:05.624999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.024 [2024-11-28 21:25:05.625059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.024 [2024-11-28 21:25:05.625088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:114232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.024 [2024-11-28 21:25:05.625388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.024 [2024-11-28 21:25:05.625521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.024 [2024-11-28 21:25:05.625555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.024 [2024-11-28 21:25:05.625737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.024 [2024-11-28 21:25:05.625818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 
[2024-11-28 21:25:05.625887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.024 [2024-11-28 21:25:05.625927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.024 [2024-11-28 21:25:05.625942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.625954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.625969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.625982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.625996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.025 [2024-11-28 21:25:05.626213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626228] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.025 [2024-11-28 21:25:05.626242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:115024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.025 [2024-11-28 21:25:05.626347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.025 [2024-11-28 21:25:05.626377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.025 [2024-11-28 21:25:05.626406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.025 [2024-11-28 21:25:05.626451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.025 [2024-11-28 21:25:05.626565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.626813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:115080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.025 [2024-11-28 21:25:05.626842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.025 [2024-11-28 21:25:05.626871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626886] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.025 [2024-11-28 21:25:05.626906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.025 [2024-11-28 21:25:05.626936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.626968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.025 [2024-11-28 21:25:05.627093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.627116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.627131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.627160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.627175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.627191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.627205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.627506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.025 [2024-11-28 21:25:05.627537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.627557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.025 [2024-11-28 21:25:05.627571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.627587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.627601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.627617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.025 [2024-11-28 21:25:05.627631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.627647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.627661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.025 [2024-11-28 21:25:05.627677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.025 [2024-11-28 21:25:05.627691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.026 [2024-11-28 21:25:05.627707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.026 [2024-11-28 21:25:05.627721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.026 [2024-11-28 21:25:05.627750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.026 [2024-11-28 21:25:05.627766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.026 [2024-11-28 21:25:05.627782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.026 [2024-11-28 21:25:05.627796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.026 [2024-11-28 21:25:05.627812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.026 [2024-11-28 21:25:05.627826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.026 [2024-11-28 21:25:05.627842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.026 [2024-11-28 21:25:05.627856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.026 [2024-11-28 21:25:05.627871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.026 [2024-11-28 21:25:05.627886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.026 [2024-11-28 21:25:05.627904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.026 [2024-11-28 21:25:05.627919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.026 [2024-11-28 21:25:05.627952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.026 [2024-11-28 21:25:05.627966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.026 [2024-11-28 21:25:05.627981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf85100 is same with the state(5) 
to be set 00:18:03.026 [2024-11-28 21:25:05.627999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:03.026 [2024-11-28 21:25:05.628026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:03.026 [2024-11-28 21:25:05.628057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114680 len:8 PRP1 0x0 PRP2 0x0 00:18:03.026 [2024-11-28 21:25:05.628072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.026 [2024-11-28 21:25:05.628123] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf85100 was disconnected and freed. reset controller. 00:18:03.026 [2024-11-28 21:25:05.629251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:03.026 [2024-11-28 21:25:05.629351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf943c0 (9): Bad file descriptor 00:18:03.026 [2024-11-28 21:25:05.629666] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.026 [2024-11-28 21:25:05.629741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.026 [2024-11-28 21:25:05.629793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.026 [2024-11-28 21:25:05.629817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf943c0 with addr=10.0.0.2, port=4421 00:18:03.026 [2024-11-28 21:25:05.629834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf943c0 is same with the state(5) to be set 00:18:03.026 [2024-11-28 21:25:05.629869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf943c0 (9): Bad file descriptor 00:18:03.026 [2024-11-28 21:25:05.629917] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:03.026 [2024-11-28 21:25:05.629937] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:03.026 [2024-11-28 21:25:05.629952] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:03.026 [2024-11-28 21:25:05.629985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:03.026 [2024-11-28 21:25:05.630020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:03.026 [2024-11-28 21:25:15.678349] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
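The trace above is the failover path this test exercises: completions on the failed path come back as ASYMMETRIC ACCESS INACCESSIBLE (03/02), the remaining queued I/O is completed with ABORTED - SQ DELETION (00/08) once the submission queue is torn down, the qpair is disconnected and freed, and bdev_nvme then reconnects the controller through the 10.0.0.2:4421 listener after a few refused connections. As a minimal sketch for summarizing such a dump offline, assuming only the nvme_qpair print formats shown above (build.log is a placeholder name, not a file produced by this job):

  # Tally completions by status string (the text in front of the (SC/SCT) code).
  grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z -]* ([0-9a-f/]*)' build.log \
    | sed 's/.*NOTICE\*: //' | sort | uniq -c | sort -rn

  # Tally printed I/O commands by opcode (READ vs WRITE).
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' build.log \
    | awk '{print $NF}' | sort | uniq -c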
00:18:03.026 Received shutdown signal, test time was about 55.524338 seconds
00:18:03.026
00:18:03.026 Latency(us)
00:18:03.026 [2024-11-28T21:25:26.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:03.026 [2024-11-28T21:25:26.769Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:03.026 Verification LBA range: start 0x0 length 0x4000
00:18:03.026 Nvme0n1 : 55.52 11140.97 43.52 0.00 0.00 11469.59 744.73 7015926.69
00:18:03.026 [2024-11-28T21:25:26.769Z] ===================================================================================================================
00:18:03.026 [2024-11-28T21:25:26.769Z] Total : 11140.97 43.52 0.00 0.00 11469.59 744.73 7015926.69
00:18:03.026 21:25:25 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:03.026 21:25:26 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:18:03.026 21:25:26 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:03.026 21:25:26 -- host/multipath.sh@125 -- # nvmftestfini
00:18:03.026 21:25:26 -- nvmf/common.sh@476 -- # nvmfcleanup
00:18:03.026 21:25:26 -- nvmf/common.sh@116 -- # sync
00:18:03.026 21:25:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:18:03.026 21:25:26 -- nvmf/common.sh@119 -- # set +e
00:18:03.026 21:25:26 -- nvmf/common.sh@120 -- # for i in {1..20}
00:18:03.026 21:25:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:18:03.026 rmmod nvme_tcp
00:18:03.026 rmmod nvme_fabrics
00:18:03.026 rmmod nvme_keyring
00:18:03.026 21:25:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:18:03.026 21:25:26 -- nvmf/common.sh@123 -- # set -e
00:18:03.026 21:25:26 -- nvmf/common.sh@124 -- # return 0
00:18:03.026 21:25:26 -- nvmf/common.sh@477 -- # '[' -n 83912 ']'
00:18:03.026 21:25:26 -- nvmf/common.sh@478 -- # killprocess 83912
00:18:03.026 21:25:26 -- common/autotest_common.sh@936 -- # '[' -z 83912 ']'
00:18:03.026 21:25:26 -- common/autotest_common.sh@940 -- # kill -0 83912
00:18:03.026 21:25:26 -- common/autotest_common.sh@941 -- # uname
00:18:03.026 21:25:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:03.026 21:25:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83912
00:18:03.026 21:25:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:03.026 killing process with pid 83912
00:18:03.026 21:25:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:03.026 21:25:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83912'
00:18:03.026 21:25:26 -- common/autotest_common.sh@955 -- # kill 83912
00:18:03.026 21:25:26 -- common/autotest_common.sh@960 -- # wait 83912
00:18:03.026 21:25:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:18:03.026 21:25:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:18:03.026 21:25:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:18:03.026 21:25:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:03.026 21:25:26 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:18:03.026 21:25:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:03.026 21:25:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:03.026 21:25:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:03.026 21:25:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:18:03.026
00:18:03.026 real 1m1.270s
00:18:03.026 user 2m49.245s
sys 0m18.781s
00:18:03.026 21:25:26 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:18:03.026 21:25:26 -- common/autotest_common.sh@10 -- # set +x
00:18:03.026 ************************************
00:18:03.026 END TEST nvmf_multipath
00:18:03.026 ************************************
00:18:03.026 21:25:26 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:18:03.026 21:25:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:18:03.026 21:25:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:03.026 21:25:26 -- common/autotest_common.sh@10 -- # set +x
00:18:03.026 ************************************
00:18:03.026 START TEST nvmf_timeout
00:18:03.026 ************************************
00:18:03.026 21:25:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:18:03.026 * Looking for test storage...
00:18:03.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:18:03.026 21:25:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:18:03.026 21:25:26 -- common/autotest_common.sh@1690 -- # lcov --version
00:18:03.026 21:25:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:18:03.287 21:25:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:18:03.287 21:25:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:18:03.287 21:25:26 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:18:03.287 21:25:26 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:18:03.287 21:25:26 -- scripts/common.sh@335 -- # IFS=.-:
00:18:03.287 21:25:26 -- scripts/common.sh@335 -- # read -ra ver1
00:18:03.287 21:25:26 -- scripts/common.sh@336 -- # IFS=.-:
00:18:03.287 21:25:26 -- scripts/common.sh@336 -- # read -ra ver2
00:18:03.287 21:25:26 -- scripts/common.sh@337 -- # local 'op=<'
00:18:03.287 21:25:26 -- scripts/common.sh@339 -- # ver1_l=2
00:18:03.287 21:25:26 -- scripts/common.sh@340 -- # ver2_l=1
00:18:03.287 21:25:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:18:03.287 21:25:26 -- scripts/common.sh@343 -- # case "$op" in
00:18:03.287 21:25:26 -- scripts/common.sh@344 -- # : 1
00:18:03.287 21:25:26 -- scripts/common.sh@363 -- # (( v = 0 ))
00:18:03.287 21:25:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:18:03.287 21:25:26 -- scripts/common.sh@364 -- # decimal 1 00:18:03.287 21:25:26 -- scripts/common.sh@352 -- # local d=1 00:18:03.287 21:25:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.287 21:25:26 -- scripts/common.sh@354 -- # echo 1 00:18:03.287 21:25:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:03.287 21:25:26 -- scripts/common.sh@365 -- # decimal 2 00:18:03.287 21:25:26 -- scripts/common.sh@352 -- # local d=2 00:18:03.287 21:25:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.287 21:25:26 -- scripts/common.sh@354 -- # echo 2 00:18:03.287 21:25:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:03.287 21:25:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:03.287 21:25:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:03.287 21:25:26 -- scripts/common.sh@367 -- # return 0 00:18:03.287 21:25:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.287 21:25:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:03.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.287 --rc genhtml_branch_coverage=1 00:18:03.287 --rc genhtml_function_coverage=1 00:18:03.287 --rc genhtml_legend=1 00:18:03.287 --rc geninfo_all_blocks=1 00:18:03.287 --rc geninfo_unexecuted_blocks=1 00:18:03.287 00:18:03.287 ' 00:18:03.287 21:25:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:03.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.287 --rc genhtml_branch_coverage=1 00:18:03.287 --rc genhtml_function_coverage=1 00:18:03.287 --rc genhtml_legend=1 00:18:03.287 --rc geninfo_all_blocks=1 00:18:03.287 --rc geninfo_unexecuted_blocks=1 00:18:03.287 00:18:03.287 ' 00:18:03.287 21:25:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:03.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.287 --rc genhtml_branch_coverage=1 00:18:03.287 --rc genhtml_function_coverage=1 00:18:03.287 --rc genhtml_legend=1 00:18:03.287 --rc geninfo_all_blocks=1 00:18:03.287 --rc geninfo_unexecuted_blocks=1 00:18:03.287 00:18:03.287 ' 00:18:03.287 21:25:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:03.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.287 --rc genhtml_branch_coverage=1 00:18:03.287 --rc genhtml_function_coverage=1 00:18:03.287 --rc genhtml_legend=1 00:18:03.287 --rc geninfo_all_blocks=1 00:18:03.287 --rc geninfo_unexecuted_blocks=1 00:18:03.287 00:18:03.287 ' 00:18:03.287 21:25:26 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:03.287 21:25:26 -- nvmf/common.sh@7 -- # uname -s 00:18:03.287 21:25:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.287 21:25:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.287 21:25:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.287 21:25:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.287 21:25:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.287 21:25:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.287 21:25:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.287 21:25:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.287 21:25:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.287 21:25:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.287 21:25:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:18:03.287 
21:25:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:18:03.287 21:25:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.287 21:25:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.287 21:25:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:03.287 21:25:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:03.287 21:25:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.288 21:25:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.288 21:25:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.288 21:25:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.288 21:25:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.288 21:25:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.288 21:25:26 -- paths/export.sh@5 -- # export PATH 00:18:03.288 21:25:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.288 21:25:26 -- nvmf/common.sh@46 -- # : 0 00:18:03.288 21:25:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:03.288 21:25:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:03.288 21:25:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:03.288 21:25:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.288 21:25:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.288 21:25:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
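A few entries up, the xtrace records how scripts/common.sh decides which lcov option spelling to use: lt 1.15 2 splits both version strings on dots, dashes and colons and compares them field by field, and because 1 is less than 2 in the first field the run keeps the older --rc lcov_branch_coverage=1 style flags. A minimal stand-alone sketch of that field-by-field comparison (a re-creation for illustration, not the scripts/common.sh helper verbatim) looks like this:

  # Sketch of the version comparison stepped through in the trace above.
  version_lt() {
      local IFS=.-:                  # split on the same separators as cmp_versions
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i x y
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          x=${a[i]:-0}
          y=${b[i]:-0}
          (( x > y )) && return 1    # first version is newer, so not "less than"
          (( x < y )) && return 0    # first version is older
      done
      return 1                       # equal versions are not strictly less
  }

  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # succeeds for this run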
00:18:03.288 21:25:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:03.288 21:25:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:03.288 21:25:26 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:03.288 21:25:26 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:03.288 21:25:26 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.288 21:25:26 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:03.288 21:25:26 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.288 21:25:26 -- host/timeout.sh@19 -- # nvmftestinit 00:18:03.288 21:25:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:03.288 21:25:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.288 21:25:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:03.288 21:25:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:03.288 21:25:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:03.288 21:25:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.288 21:25:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.288 21:25:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.288 21:25:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:03.288 21:25:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:03.288 21:25:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:03.288 21:25:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:03.288 21:25:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:03.288 21:25:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:03.288 21:25:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.288 21:25:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.288 21:25:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:03.288 21:25:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:03.288 21:25:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:03.288 21:25:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:03.288 21:25:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:03.288 21:25:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.288 21:25:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:03.288 21:25:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:03.288 21:25:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:03.288 21:25:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:03.288 21:25:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:03.288 21:25:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:03.288 Cannot find device "nvmf_tgt_br" 00:18:03.288 21:25:26 -- nvmf/common.sh@154 -- # true 00:18:03.288 21:25:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:03.288 Cannot find device "nvmf_tgt_br2" 00:18:03.288 21:25:26 -- nvmf/common.sh@155 -- # true 00:18:03.288 21:25:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:03.288 21:25:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:03.288 Cannot find device "nvmf_tgt_br" 00:18:03.288 21:25:26 -- nvmf/common.sh@157 -- # true 00:18:03.288 21:25:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:03.288 Cannot find device "nvmf_tgt_br2" 00:18:03.288 21:25:26 -- nvmf/common.sh@158 -- # true 00:18:03.288 21:25:26 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:03.288 21:25:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:03.288 21:25:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:03.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:03.288 21:25:26 -- nvmf/common.sh@161 -- # true 00:18:03.288 21:25:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:03.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:03.288 21:25:26 -- nvmf/common.sh@162 -- # true 00:18:03.288 21:25:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:03.288 21:25:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:03.288 21:25:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:03.288 21:25:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:03.288 21:25:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:03.288 21:25:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:03.288 21:25:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:03.288 21:25:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:03.288 21:25:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:03.288 21:25:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:03.288 21:25:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:03.288 21:25:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:03.288 21:25:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:03.288 21:25:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:03.288 21:25:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:03.288 21:25:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:03.289 21:25:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:03.289 21:25:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:03.289 21:25:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:03.289 21:25:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:03.289 21:25:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:03.548 21:25:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:03.548 21:25:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:03.548 21:25:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:03.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:18:03.548 00:18:03.548 --- 10.0.0.2 ping statistics --- 00:18:03.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.548 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:03.548 21:25:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:03.548 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:03.548 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:18:03.548 00:18:03.548 --- 10.0.0.3 ping statistics --- 00:18:03.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.548 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:03.548 21:25:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:03.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:03.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:03.548 00:18:03.548 --- 10.0.0.1 ping statistics --- 00:18:03.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.548 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:03.548 21:25:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.548 21:25:27 -- nvmf/common.sh@421 -- # return 0 00:18:03.548 21:25:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:03.548 21:25:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.548 21:25:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:03.548 21:25:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:03.548 21:25:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.548 21:25:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:03.548 21:25:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:03.548 21:25:27 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:03.548 21:25:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:03.548 21:25:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:03.548 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:18:03.548 21:25:27 -- nvmf/common.sh@469 -- # nvmfpid=85102 00:18:03.548 21:25:27 -- nvmf/common.sh@470 -- # waitforlisten 85102 00:18:03.548 21:25:27 -- common/autotest_common.sh@829 -- # '[' -z 85102 ']' 00:18:03.548 21:25:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:03.548 21:25:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.548 21:25:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.548 21:25:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.548 21:25:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.548 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:18:03.548 [2024-11-28 21:25:27.134503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:03.548 [2024-11-28 21:25:27.135257] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.548 [2024-11-28 21:25:27.274893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:03.807 [2024-11-28 21:25:27.308221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:03.807 [2024-11-28 21:25:27.308406] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.807 [2024-11-28 21:25:27.308417] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
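Everything between nvmftestinit and the three pings above is nvmf_veth_init building a private test topology: the initiator half of a veth pair (nvmf_init_if, 10.0.0.1) stays in the root namespace, the target halves (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends are enslaved to the nvmf_br bridge with TCP port 4420 allowed through iptables. Condensed to the essential commands from the trace (one target interface shown for brevity, same names and addresses as this run), the setup is roughly:

  # Condensed recap of the topology built by the ip/iptables commands above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                        # bridge the two halves
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                             # initiator reaches target

With the addressing verified, nvme-tcp is loaded on the host side and the target application is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3), which is why the subsystem configured below listens on 10.0.0.2.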
00:18:03.807 [2024-11-28 21:25:27.308425] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.807 [2024-11-28 21:25:27.308562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.807 [2024-11-28 21:25:27.308575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.376 21:25:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.376 21:25:28 -- common/autotest_common.sh@862 -- # return 0 00:18:04.376 21:25:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:04.376 21:25:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:04.376 21:25:28 -- common/autotest_common.sh@10 -- # set +x 00:18:04.635 21:25:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.635 21:25:28 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:04.635 21:25:28 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:04.635 [2024-11-28 21:25:28.347079] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.636 21:25:28 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:05.204 Malloc0 00:18:05.204 21:25:28 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:05.204 21:25:28 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:05.463 21:25:29 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.722 [2024-11-28 21:25:29.380259] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.722 21:25:29 -- host/timeout.sh@32 -- # bdevperf_pid=85157 00:18:05.722 21:25:29 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:05.722 21:25:29 -- host/timeout.sh@34 -- # waitforlisten 85157 /var/tmp/bdevperf.sock 00:18:05.722 21:25:29 -- common/autotest_common.sh@829 -- # '[' -z 85157 ']' 00:18:05.722 21:25:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.722 21:25:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.722 21:25:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.722 21:25:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.722 21:25:29 -- common/autotest_common.sh@10 -- # set +x 00:18:05.722 [2024-11-28 21:25:29.438064] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
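With both target reactors up, the script provisions the subsystem entirely through rpc.py and then launches the bdevperf application that acts as the NVMe/TCP host. Reduced to the bare commands used above (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, talking to the target's default /var/tmp/spdk.sock), the sequence is roughly:

  # Target side: TCP transport, a 64 MB / 512-byte-block malloc bdev, one
  # subsystem exposing it, and a listener on the namespaced address.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: bdevperf gets its own core mask and RPC socket; -z keeps it idle
  # until the test script later triggers the verify workload over that socket.
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -f &

Running bdevperf as a separate process with its own socket is what lets the timeout test reconfigure the target, for example dropping the listener, while the host still has I/O in flight.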
00:18:05.722 [2024-11-28 21:25:29.438151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85157 ] 00:18:05.981 [2024-11-28 21:25:29.572797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.981 [2024-11-28 21:25:29.612694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.918 21:25:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.918 21:25:30 -- common/autotest_common.sh@862 -- # return 0 00:18:06.918 21:25:30 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:06.918 21:25:30 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:07.486 NVMe0n1 00:18:07.486 21:25:30 -- host/timeout.sh@51 -- # rpc_pid=85175 00:18:07.486 21:25:30 -- host/timeout.sh@53 -- # sleep 1 00:18:07.486 21:25:30 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:07.486 Running I/O for 10 seconds... 00:18:08.423 21:25:31 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:08.685 [2024-11-28 21:25:32.207509] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.685 [2024-11-28 21:25:32.207591] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.685 [2024-11-28 21:25:32.207618] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.685 [2024-11-28 21:25:32.207626] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.685 [2024-11-28 21:25:32.207634] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.685 [2024-11-28 21:25:32.207641] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.685 [2024-11-28 21:25:32.207648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.685 [2024-11-28 21:25:32.207655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.685 [2024-11-28 21:25:32.207662] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.685 [2024-11-28 21:25:32.207669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.685 [2024-11-28 21:25:32.207678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 21:25:32.207685] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 
21:25:32.207691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 21:25:32.207698] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 21:25:32.207705] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 21:25:32.207712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 21:25:32.207719] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 21:25:32.207726] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 21:25:32.207733] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 21:25:32.207740] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 21:25:32.207747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 21:25:32.207754] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 21:25:32.207761] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 21:25:32.207768] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 21:25:32.207775] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 21:25:32.207781] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5a60 is same with the state(5) to be set 00:18:08.686 [2024-11-28 21:25:32.207928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.208145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.208174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.208185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.208196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.208313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.208328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.208338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.208349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.208358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.208368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.208377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.208475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.208485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.208496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.208505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.208515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.208524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.208635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:125440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.208650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.208662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:125448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.208670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.208681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.208929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.208946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.208956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.208967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.208977] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.208989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.208998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.209053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.209062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.209073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.209191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.209207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.209217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.209228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.209237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.209248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.209464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.209479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.209488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.209499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:124968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.209508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.209656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.209774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.209791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.686 [2024-11-28 21:25:32.209800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.209812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.209821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.209912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.686 [2024-11-28 21:25:32.209927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.209939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.209949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.209960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.686 [2024-11-28 21:25:32.209969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.210221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.686 [2024-11-28 21:25:32.210243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.210256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.210337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.210353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.210362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.210373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.210382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.210393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.686 [2024-11-28 21:25:32.210403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.210694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.686 [2024-11-28 21:25:32.210901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.210916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.686 [2024-11-28 21:25:32.210927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.686 [2024-11-28 21:25:32.210937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.210946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.210957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.210966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.211364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.211391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.211406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.211416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.211428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.211438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.211449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.211458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.211469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.211479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.211490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.211514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.211525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.211904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 
21:25:32.211930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.211940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.211951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.211961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.211972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.211983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.211994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.212020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.212033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.212043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.212322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.212420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.212440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.212449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.212462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.212471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.212614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.212919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.213042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.213056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.213068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.213077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.213088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.213097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.213108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.213117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.213127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.213137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.213148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.213527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.213553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.213564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.213575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.213584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.213595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.213605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.213616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.213625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.213636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.213645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.213656] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.213665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.213916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.213936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.213949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.213959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.213970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.214331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.214347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.214357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.214368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.214378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.214389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.687 [2024-11-28 21:25:32.214398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.214408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.214417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.214428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.214437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.214595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.214831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.214846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.214857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.214869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.214878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.214890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.215193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.215227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.687 [2024-11-28 21:25:32.215238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.687 [2024-11-28 21:25:32.215252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.215262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.215273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.215282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.215293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.215303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.215314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:125840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.215324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.215335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.215596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.215619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.215630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.215642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125864 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.215964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.216088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.216101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.216113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.216123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.216134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.216143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.216155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.216168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.216503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.216522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.216535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.216544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.216556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.216567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.216578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.216587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.216598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.216607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.216618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:08.688 [2024-11-28 21:25:32.216628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.216892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.216906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.216918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.216928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.216939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.216948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.216960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.216969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.216981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.216990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.217107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.217124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.217136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.217146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.217157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.217167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.217315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.217557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.217575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 
21:25:32.217588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.217599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.217609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.217620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.217630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.217641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.217650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.217661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.217670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.217932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.217946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.217958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.218258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.218336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.218347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.218360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.218370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.218382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.218391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.218402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.218412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.218423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.218432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.218443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.218817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.218842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.218853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.218865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.688 [2024-11-28 21:25:32.218874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.218885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.218894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.688 [2024-11-28 21:25:32.218906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.688 [2024-11-28 21:25:32.218918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.689 [2024-11-28 21:25:32.218930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.689 [2024-11-28 21:25:32.218939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.689 [2024-11-28 21:25:32.218950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.689 [2024-11-28 21:25:32.219069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.689 [2024-11-28 21:25:32.219088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.689 [2024-11-28 21:25:32.219098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.689 [2024-11-28 21:25:32.219234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.689 [2024-11-28 21:25:32.219500] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.689 [2024-11-28 21:25:32.219625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.689 [2024-11-28 21:25:32.219642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.689 [2024-11-28 21:25:32.219654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.689 [2024-11-28 21:25:32.219795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.689 [2024-11-28 21:25:32.219909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8749a0 is same with the state(5) to be set 00:18:08.689 [2024-11-28 21:25:32.219924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.689 [2024-11-28 21:25:32.219933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.689 [2024-11-28 21:25:32.220271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125472 len:8 PRP1 0x0 PRP2 0x0 00:18:08.689 [2024-11-28 21:25:32.220293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.689 [2024-11-28 21:25:32.220340] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8749a0 was disconnected and freed. reset controller. 00:18:08.689 [2024-11-28 21:25:32.220659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.689 [2024-11-28 21:25:32.220688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.689 [2024-11-28 21:25:32.220700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.689 [2024-11-28 21:25:32.220710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.689 [2024-11-28 21:25:32.220720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.689 [2024-11-28 21:25:32.220730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.689 [2024-11-28 21:25:32.220740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.689 [2024-11-28 21:25:32.220749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.689 [2024-11-28 21:25:32.220758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x879610 is same with the state(5) to be set 00:18:08.689 [2024-11-28 21:25:32.221206] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:08.689 [2024-11-28 21:25:32.221256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x879610 (9): Bad file 
descriptor 00:18:08.689 [2024-11-28 21:25:32.221354] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:08.689 [2024-11-28 21:25:32.221559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:08.689 [2024-11-28 21:25:32.221739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:08.689 [2024-11-28 21:25:32.221863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x879610 with addr=10.0.0.2, port=4420 00:18:08.689 [2024-11-28 21:25:32.221876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x879610 is same with the state(5) to be set 00:18:08.689 [2024-11-28 21:25:32.221899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x879610 (9): Bad file descriptor 00:18:08.689 [2024-11-28 21:25:32.221916] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:08.689 [2024-11-28 21:25:32.221925] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:08.689 [2024-11-28 21:25:32.221935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:08.689 [2024-11-28 21:25:32.221957] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:08.689 [2024-11-28 21:25:32.222344] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:08.689 21:25:32 -- host/timeout.sh@56 -- # sleep 2 00:18:10.590 [2024-11-28 21:25:34.222495] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.590 [2024-11-28 21:25:34.222629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.590 [2024-11-28 21:25:34.222672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.590 [2024-11-28 21:25:34.222689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x879610 with addr=10.0.0.2, port=4420 00:18:10.590 [2024-11-28 21:25:34.222701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x879610 is same with the state(5) to be set 00:18:10.590 [2024-11-28 21:25:34.222728] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x879610 (9): Bad file descriptor 00:18:10.590 [2024-11-28 21:25:34.222746] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:10.590 [2024-11-28 21:25:34.222755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:10.590 [2024-11-28 21:25:34.222765] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:10.590 [2024-11-28 21:25:34.222791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:10.590 [2024-11-28 21:25:34.222802] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:10.590 21:25:34 -- host/timeout.sh@57 -- # get_controller 00:18:10.590 21:25:34 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:10.590 21:25:34 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:10.847 21:25:34 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:10.847 21:25:34 -- host/timeout.sh@58 -- # get_bdev 00:18:10.847 21:25:34 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:10.847 21:25:34 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:11.105 21:25:34 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:11.105 21:25:34 -- host/timeout.sh@61 -- # sleep 5 00:18:13.004 [2024-11-28 21:25:36.222926] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:13.004 [2024-11-28 21:25:36.223047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:13.004 [2024-11-28 21:25:36.223095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:13.004 [2024-11-28 21:25:36.223112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x879610 with addr=10.0.0.2, port=4420 00:18:13.004 [2024-11-28 21:25:36.223125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x879610 is same with the state(5) to be set 00:18:13.004 [2024-11-28 21:25:36.223161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x879610 (9): Bad file descriptor 00:18:13.004 [2024-11-28 21:25:36.223181] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:13.004 [2024-11-28 21:25:36.223191] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:13.004 [2024-11-28 21:25:36.223202] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:13.004 [2024-11-28 21:25:36.223229] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:13.004 [2024-11-28 21:25:36.223241] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:14.907 [2024-11-28 21:25:38.223563] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:14.907 [2024-11-28 21:25:38.223627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:14.907 [2024-11-28 21:25:38.223654] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:14.907 [2024-11-28 21:25:38.223664] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:14.907 [2024-11-28 21:25:38.223692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
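A note on the retry cadence visible in the trace above, using only the timestamps printed there: the reconnect attempts against tqpair=0x879610 fail at 21:25:32, 21:25:34 and 21:25:36 (spaced 34 - 32 = 36 - 34 = 2 seconds apart), and by 21:25:38, i.e. 38 - 32 = 6 seconds after the first failure, the controller is reported as "already in failed state" and the reset is abandoned. That spacing lines up with the script's sleep 2 and sleep 5 steps shown in the same trace; whether it is also shaped by a reconnect delay configured on this first controller is not visible in this excerpt.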
00:18:15.843 00:18:15.843 Latency(us) 00:18:15.843 [2024-11-28T21:25:39.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.843 [2024-11-28T21:25:39.586Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:15.843 Verification LBA range: start 0x0 length 0x4000 00:18:15.843 NVMe0n1 : 8.13 1923.15 7.51 15.74 0.00 66088.64 2964.01 7046430.72 00:18:15.843 [2024-11-28T21:25:39.586Z] =================================================================================================================== 00:18:15.843 [2024-11-28T21:25:39.586Z] Total : 1923.15 7.51 15.74 0.00 66088.64 2964.01 7046430.72 00:18:15.843 0 00:18:16.102 21:25:39 -- host/timeout.sh@62 -- # get_controller 00:18:16.102 21:25:39 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:16.102 21:25:39 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:16.361 21:25:39 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:18:16.361 21:25:39 -- host/timeout.sh@63 -- # get_bdev 00:18:16.361 21:25:39 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:16.361 21:25:39 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:16.620 21:25:40 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:18:16.620 21:25:40 -- host/timeout.sh@65 -- # wait 85175 00:18:16.620 21:25:40 -- host/timeout.sh@67 -- # killprocess 85157 00:18:16.620 21:25:40 -- common/autotest_common.sh@936 -- # '[' -z 85157 ']' 00:18:16.620 21:25:40 -- common/autotest_common.sh@940 -- # kill -0 85157 00:18:16.620 21:25:40 -- common/autotest_common.sh@941 -- # uname 00:18:16.620 21:25:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:16.620 21:25:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85157 00:18:16.620 killing process with pid 85157 00:18:16.620 Received shutdown signal, test time was about 9.186875 seconds 00:18:16.620 00:18:16.620 Latency(us) 00:18:16.620 [2024-11-28T21:25:40.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.620 [2024-11-28T21:25:40.363Z] =================================================================================================================== 00:18:16.621 [2024-11-28T21:25:40.364Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.621 21:25:40 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:16.621 21:25:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:16.621 21:25:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85157' 00:18:16.621 21:25:40 -- common/autotest_common.sh@955 -- # kill 85157 00:18:16.621 21:25:40 -- common/autotest_common.sh@960 -- # wait 85157 00:18:16.879 21:25:40 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.138 [2024-11-28 21:25:40.628313] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
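A quick arithmetic check on the bdevperf summary above, using only the numbers it prints: at an I/O size of 4096 bytes, 1923.15 IOPS x 4096 B = 7,877,222 B/s, which is 7.51 MiB/s and matches the MiB/s column, and over the 8.13 s runtime that is about 1923.15 x 8.13 = 15,635 completed I/Os. The 15.74 failures per second and the 7,046,430.72 us (about 7.0 s) maximum latency are consistent with commands having sat queued across the reconnect window seen earlier, and the second, all-zero table appears to be the shutdown summary printed after the controller had already been dropped.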
00:18:17.138 21:25:40 -- host/timeout.sh@74 -- # bdevperf_pid=85303 00:18:17.138 21:25:40 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:17.138 21:25:40 -- host/timeout.sh@76 -- # waitforlisten 85303 /var/tmp/bdevperf.sock 00:18:17.138 21:25:40 -- common/autotest_common.sh@829 -- # '[' -z 85303 ']' 00:18:17.138 21:25:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.138 21:25:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.138 21:25:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.138 21:25:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.138 21:25:40 -- common/autotest_common.sh@10 -- # set +x 00:18:17.138 [2024-11-28 21:25:40.695620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:17.138 [2024-11-28 21:25:40.695722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85303 ] 00:18:17.138 [2024-11-28 21:25:40.832294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.138 [2024-11-28 21:25:40.865359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.075 21:25:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.075 21:25:41 -- common/autotest_common.sh@862 -- # return 0 00:18:18.075 21:25:41 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:18.334 21:25:41 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:18.593 NVMe0n1 00:18:18.593 21:25:42 -- host/timeout.sh@84 -- # rpc_pid=85322 00:18:18.593 21:25:42 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:18.593 21:25:42 -- host/timeout.sh@86 -- # sleep 1 00:18:18.593 Running I/O for 10 seconds... 
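The trace above restarts bdevperf and re-attaches the target with explicit reconnect settings before the 10-second run starts. For reference, a minimal sketch of that same attach call follows; the command and values are copied from the trace, while the per-flag descriptions are paraphrased from SPDK's bdev_nvme documentation and should be read as an assumption rather than something this log states.

# Sketch: re-issue the attach recorded above (assumes rpc.py from the SPDK repo and a
# bdevperf instance listening on /var/tmp/bdevperf.sock, as in the trace).
#   --reconnect-delay-sec 1       wait about 1 s between reconnect attempts
#   --fast-io-fail-timeout-sec 2  after about 2 s disconnected, start failing queued I/O back to the caller
#   --ctrlr-loss-timeout-sec 5    after about 5 s disconnected, stop retrying and drop the controller
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

If those semantics hold, removing the listener during the 10-second run should show I/O failing back after roughly 2 seconds while reconnect attempts continue for up to 5 seconds, which is the pattern the trace below exercises.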
00:18:19.554 21:25:43 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:19.816 [2024-11-28 21:25:43.380100] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380165] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380192] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380214] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380221] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380229] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380236] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380243] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380250] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380258] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380265] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380272] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380279] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380286] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380293] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380300] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380307] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380314] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380321] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380335] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380341] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380349] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380356] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380363] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380370] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380384] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380391] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380405] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380412] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380421] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380428] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380451] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380476] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380483] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380491] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380498] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380506] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380513] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba51b0 is same with the state(5) to be set 00:18:19.816 [2024-11-28 21:25:43.380900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.816 [2024-11-28 21:25:43.380935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.816 [2024-11-28 21:25:43.380958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.816 [2024-11-28 21:25:43.380968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.816 [2024-11-28 21:25:43.380979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.816 [2024-11-28 21:25:43.381006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.816 [2024-11-28 21:25:43.381044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.816 [2024-11-28 21:25:43.381055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.816 [2024-11-28 21:25:43.381065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.816 [2024-11-28 21:25:43.381074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.816 [2024-11-28 21:25:43.381085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.816 [2024-11-28 21:25:43.381094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.816 [2024-11-28 21:25:43.381105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.816 [2024-11-28 21:25:43.381113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.381123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.381132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.381142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.381165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.381524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.381542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.381554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.381563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.381574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.381582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.381886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.381906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.381919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.381928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.381938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.381947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.381958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.381967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.381977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.381986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.381996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.382039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.382051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.382060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.382179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.382191] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.382203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.382211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.382353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.382612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.382629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.382638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.382649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.817 [2024-11-28 21:25:43.382659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.382670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.817 [2024-11-28 21:25:43.382679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.382689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.817 [2024-11-28 21:25:43.382697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.382708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.817 [2024-11-28 21:25:43.382716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.382727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.382735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.382746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.383176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.383201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.817 [2024-11-28 21:25:43.383211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.383222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.383231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.383244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.383253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.383265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.383274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.383285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.383294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.383306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.383314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.383326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.383334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.383345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.383355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.383484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.817 [2024-11-28 21:25:43.383514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.817 [2024-11-28 21:25:43.383526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.383535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.383546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.383555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.383567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.383782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.383807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.383817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.383828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.383837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.383966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.383980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.383992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.818 [2024-11-28 21:25:43.384101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.384117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.384127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.384221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.384233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.384244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.818 [2024-11-28 21:25:43.384254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.384265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.384407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.384654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.384665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 
[2024-11-28 21:25:43.384677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.384686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.384696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.384704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.384715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.384723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.384733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.384742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.384752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.384760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.384771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.384779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.385185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.385206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.385218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.385227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.385239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.385248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.385259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.818 [2024-11-28 21:25:43.385268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.385279] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.385289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.385300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.818 [2024-11-28 21:25:43.385309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.385324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.818 [2024-11-28 21:25:43.385605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.385619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.385852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.385868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.385878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.385889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.818 [2024-11-28 21:25:43.385897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.385909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.818 [2024-11-28 21:25:43.385917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.385928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.818 [2024-11-28 21:25:43.385936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.818 [2024-11-28 21:25:43.385947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.818 [2024-11-28 21:25:43.385956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.385966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.819 [2024-11-28 21:25:43.386128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.386372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.386398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.386409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.386418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.386428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.819 [2024-11-28 21:25:43.386437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.386449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.386458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.386468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.386477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.386487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.387020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.387074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.387086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.387097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.387106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.387120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.387130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.387152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.387162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.387173] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.387183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.387194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.819 [2024-11-28 21:25:43.387203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.387214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.819 [2024-11-28 21:25:43.387223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.387574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.387587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.387599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.819 [2024-11-28 21:25:43.387608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.387618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.819 [2024-11-28 21:25:43.387626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.387637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.387753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.387768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.387777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.388110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.388133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.388145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.388154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.388165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 
nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.388173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.388184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.819 [2024-11-28 21:25:43.388192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.388202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.388343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.388644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.388750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.388768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.388778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.388789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.388797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.388808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.389061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.389088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.389098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.389110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.389119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.389130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.819 [2024-11-28 21:25:43.389139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.819 [2024-11-28 21:25:43.389150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126320 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 21:25:43.389161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.389412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 21:25:43.389423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.389434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 21:25:43.389442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.389453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.820 [2024-11-28 21:25:43.389462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.389472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.820 [2024-11-28 21:25:43.389481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.389491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.820 [2024-11-28 21:25:43.389700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.389716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 21:25:43.389725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.389735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.820 [2024-11-28 21:25:43.389744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.389754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.820 [2024-11-28 21:25:43.389763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.389775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.820 [2024-11-28 21:25:43.389784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.390017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:19.820 [2024-11-28 21:25:43.390041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.390054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 21:25:43.390064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.390074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 21:25:43.390082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.390093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.820 [2024-11-28 21:25:43.390101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.390112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 21:25:43.390325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.390339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 21:25:43.390348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.390359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 21:25:43.390368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.390379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 21:25:43.390387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.390398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 21:25:43.390406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.390540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.820 [2024-11-28 21:25:43.390647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.390667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 
21:25:43.390677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.390687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.820 [2024-11-28 21:25:43.390696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.390994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 21:25:43.391027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.391040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 21:25:43.391049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.391059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.820 [2024-11-28 21:25:43.391069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.391080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 21:25:43.391088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.391099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.820 [2024-11-28 21:25:43.391355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.391370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecb870 is same with the state(5) to be set 00:18:19.820 [2024-11-28 21:25:43.391384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:19.820 [2024-11-28 21:25:43.391392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:19.820 [2024-11-28 21:25:43.391401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126440 len:8 PRP1 0x0 PRP2 0x0 00:18:19.820 [2024-11-28 21:25:43.391410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.391657] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ecb870 was disconnected and freed. reset controller. 
00:18:19.820 [2024-11-28 21:25:43.391861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.820 [2024-11-28 21:25:43.391953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.391971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.820 [2024-11-28 21:25:43.391981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.820 [2024-11-28 21:25:43.391990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.820 [2024-11-28 21:25:43.391998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.821 [2024-11-28 21:25:43.392265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.821 [2024-11-28 21:25:43.392288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.821 [2024-11-28 21:25:43.392298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed0450 is same with the state(5) to be set 00:18:19.821 [2024-11-28 21:25:43.392714] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:19.821 [2024-11-28 21:25:43.392750] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed0450 (9): Bad file descriptor 00:18:19.821 [2024-11-28 21:25:43.392952] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.821 [2024-11-28 21:25:43.393167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.821 [2024-11-28 21:25:43.393513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.821 [2024-11-28 21:25:43.393542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ed0450 with addr=10.0.0.2, port=4420 00:18:19.821 [2024-11-28 21:25:43.393554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed0450 is same with the state(5) to be set 00:18:19.821 [2024-11-28 21:25:43.393575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed0450 (9): Bad file descriptor 00:18:19.821 [2024-11-28 21:25:43.393591] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:19.821 [2024-11-28 21:25:43.393600] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:19.821 [2024-11-28 21:25:43.393610] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:19.821 [2024-11-28 21:25:43.393854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:19.821 [2024-11-28 21:25:43.393979] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
21:25:43 -- host/timeout.sh@90 -- # sleep 1
00:18:20.756 [2024-11-28 21:25:44.394238] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:20.756 [2024-11-28 21:25:44.394360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:20.757 [2024-11-28 21:25:44.394400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:20.757 [2024-11-28 21:25:44.394416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ed0450 with addr=10.0.0.2, port=4420
00:18:20.757 [2024-11-28 21:25:44.394429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed0450 is same with the state(5) to be set
00:18:20.757 [2024-11-28 21:25:44.394453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed0450 (9): Bad file descriptor
00:18:20.757 [2024-11-28 21:25:44.394471] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:20.757 [2024-11-28 21:25:44.394480] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:20.757 [2024-11-28 21:25:44.394490] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:20.757 [2024-11-28 21:25:44.394822] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:20.757 [2024-11-28 21:25:44.394849] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:20.757 21:25:44 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:21.015 [2024-11-28 21:25:44.659295] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:21.015 21:25:44 -- host/timeout.sh@92 -- # wait 85322
00:18:21.948 [2024-11-28 21:25:45.415125] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:30.068
00:18:30.068                                                              Latency(us)
00:18:30.068 [2024-11-28T21:25:53.811Z] Device Information          : runtime(s)      IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:18:30.068 [2024-11-28T21:25:53.811Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:30.068                              Verification LBA range: start 0x0 length 0x4000
00:18:30.068                              NVMe0n1                     :      10.01    9979.34      38.98      0.00     0.00   12811.16    1079.85 3035150.89
00:18:30.068 [2024-11-28T21:25:53.811Z] ===================================================================================================================
00:18:30.068 [2024-11-28T21:25:53.811Z] Total                       :              9979.34      38.98      0.00     0.00   12811.16    1079.85 3035150.89
00:18:30.068 0
00:18:30.068 21:25:52 -- host/timeout.sh@97 -- # rpc_pid=85431
00:18:30.068 21:25:52 -- host/timeout.sh@98 -- # sleep 1
00:18:30.068 21:25:52 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:30.068 Running I/O for 10 seconds...
00:18:30.068 21:25:53 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:30.068 [2024-11-28 21:25:53.542005] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542112] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542120] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542128] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542144] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542152] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542159] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542175] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542183] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542191] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542199] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542214] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542222] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542230] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542238] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542246] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542261] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542269] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542277] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542284] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542292] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542300] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542308] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2d80 is same with the state(5) to be set 00:18:30.068 [2024-11-28 21:25:53.542363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.068 [2024-11-28 21:25:53.542392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.068 [2024-11-28 21:25:53.542414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.068 [2024-11-28 21:25:53.542425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.068 [2024-11-28 21:25:53.542452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.068 [2024-11-28 21:25:53.542460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.068 [2024-11-28 21:25:53.542471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.068 [2024-11-28 21:25:53.542480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.068 [2024-11-28 21:25:53.542490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.068 [2024-11-28 21:25:53.542500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.068 [2024-11-28 21:25:53.542510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.068 [2024-11-28 21:25:53.542519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.068 [2024-11-28 21:25:53.542529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.068 [2024-11-28 21:25:53.542538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.068 [2024-11-28 21:25:53.542549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.068 [2024-11-28 21:25:53.542558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.068 [2024-11-28 21:25:53.542568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.068 [2024-11-28 21:25:53.542576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.068 [2024-11-28 21:25:53.542587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.068 [2024-11-28 21:25:53.542595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.068 [2024-11-28 21:25:53.542606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.068 [2024-11-28 21:25:53.542615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.068 [2024-11-28 21:25:53.542625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.068 [2024-11-28 21:25:53.542634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.068 [2024-11-28 21:25:53.542644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.068 [2024-11-28 21:25:53.542655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.068 [2024-11-28 21:25:53.542665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.068 [2024-11-28 21:25:53.542674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.068 [2024-11-28 21:25:53.542684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.068 [2024-11-28 21:25:53.542693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.542704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.542712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.542722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.542732] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.542743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.542751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.542762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.542770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.542781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.542790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.542800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.542808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.542819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.542827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.542838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.542846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.542856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.542865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.542876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.542884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.542896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.542904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.542915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.542923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.542934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.542943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.542953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.542962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.542991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.542999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.543094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.543168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.543188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.543231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.543259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:30.069 [2024-11-28 21:25:53.543421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.543491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.543512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.543532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.543551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.543572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.543592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.543603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.543612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 
21:25:53.543623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.543632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.544561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.544591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.544604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.544615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.544627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.544636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.544647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.544656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.544667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.544676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.544688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.544697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.544708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.544717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.544728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.544738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.544750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.544868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.544884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.544894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.544905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.544914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.545038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.545060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.545080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.545100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.545222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.545242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.545263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.545383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545400] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.545427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.545448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.545566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.545594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.545678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.545701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.545722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.545743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.545754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.069 [2024-11-28 21:25:53.545895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.546171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.546191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.546307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 
lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.546322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.069 [2024-11-28 21:25:53.546336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.069 [2024-11-28 21:25:53.546345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.546445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.546467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.546487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.546569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.070 [2024-11-28 21:25:53.546590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.546611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.546632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.546736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.546757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.546778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.546884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.546906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.546927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.546947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.070 [2024-11-28 21:25:53.546967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.546978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.546987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.547012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.070 [2024-11-28 21:25:53.547155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.547303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.070 [2024-11-28 21:25:53.547318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.547467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.070 
[2024-11-28 21:25:53.547481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.547703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.070 [2024-11-28 21:25:53.547717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.547729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.547739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.547752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.547761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.547772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.547781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.547793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.070 [2024-11-28 21:25:53.548072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.548176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.070 [2024-11-28 21:25:53.548188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.548199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.070 [2024-11-28 21:25:53.548210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.548221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.070 [2024-11-28 21:25:53.548231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.548301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.070 [2024-11-28 21:25:53.548314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.548326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.548336] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.548347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.548356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.548367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.070 [2024-11-28 21:25:53.548377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.548388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.548397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.548408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.548540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.548847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.548861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.548872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.548881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.548893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.548902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.548913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.548923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.548934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.070 [2024-11-28 21:25:53.548943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.548954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83770 is same with the state(5) to be set 00:18:30.070 [2024-11-28 21:25:53.549135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:18:30.070 [2024-11-28 21:25:53.549216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.070 [2024-11-28 21:25:53.549226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128600 len:8 PRP1 0x0 PRP2 0x0 00:18:30.070 [2024-11-28 21:25:53.549236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.549353] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f83770 was disconnected and freed. reset controller. 00:18:30.070 [2024-11-28 21:25:53.549569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.070 [2024-11-28 21:25:53.549599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.549611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.070 [2024-11-28 21:25:53.549698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.549713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.070 [2024-11-28 21:25:53.549723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.549732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.070 [2024-11-28 21:25:53.549741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.070 [2024-11-28 21:25:53.549750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed0450 is same with the state(5) to be set 00:18:30.070 [2024-11-28 21:25:53.550447] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:30.070 [2024-11-28 21:25:53.550489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed0450 (9): Bad file descriptor 00:18:30.070 [2024-11-28 21:25:53.550595] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:30.070 [2024-11-28 21:25:53.550748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:30.070 [2024-11-28 21:25:53.551057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:30.070 [2024-11-28 21:25:53.551089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ed0450 with addr=10.0.0.2, port=4420 00:18:30.070 [2024-11-28 21:25:53.551102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed0450 is same with the state(5) to be set 00:18:30.070 [2024-11-28 21:25:53.551124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed0450 (9): Bad file descriptor 00:18:30.070 [2024-11-28 21:25:53.551153] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:30.070 [2024-11-28 21:25:53.551169] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:30.070 [2024-11-28 21:25:53.551179] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:30.070 [2024-11-28 21:25:53.551201] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:30.070 [2024-11-28 21:25:53.551213] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:30.070 21:25:53 -- host/timeout.sh@101 -- # sleep 3 00:18:31.005 [2024-11-28 21:25:54.551349] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.005 [2024-11-28 21:25:54.551472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.005 [2024-11-28 21:25:54.551529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.005 [2024-11-28 21:25:54.551544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ed0450 with addr=10.0.0.2, port=4420 00:18:31.006 [2024-11-28 21:25:54.551557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed0450 is same with the state(5) to be set 00:18:31.006 [2024-11-28 21:25:54.551583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed0450 (9): Bad file descriptor 00:18:31.006 [2024-11-28 21:25:54.551601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:31.006 [2024-11-28 21:25:54.551609] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:31.006 [2024-11-28 21:25:54.551619] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:31.006 [2024-11-28 21:25:54.551661] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:31.006 [2024-11-28 21:25:54.551843] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:31.942 [2024-11-28 21:25:55.552220] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.942 [2024-11-28 21:25:55.552341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.942 [2024-11-28 21:25:55.552381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.942 [2024-11-28 21:25:55.552397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ed0450 with addr=10.0.0.2, port=4420 00:18:31.942 [2024-11-28 21:25:55.552409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed0450 is same with the state(5) to be set 00:18:31.942 [2024-11-28 21:25:55.552435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed0450 (9): Bad file descriptor 00:18:31.942 [2024-11-28 21:25:55.552452] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:31.942 [2024-11-28 21:25:55.552461] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:31.942 [2024-11-28 21:25:55.552471] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:31.942 [2024-11-28 21:25:55.552497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:31.942 [2024-11-28 21:25:55.552509] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:32.877 [2024-11-28 21:25:56.553472] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.877 [2024-11-28 21:25:56.553598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.877 [2024-11-28 21:25:56.553639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.877 [2024-11-28 21:25:56.553655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ed0450 with addr=10.0.0.2, port=4420 00:18:32.877 [2024-11-28 21:25:56.553668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed0450 is same with the state(5) to be set 00:18:32.877 [2024-11-28 21:25:56.554232] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed0450 (9): Bad file descriptor 00:18:32.877 [2024-11-28 21:25:56.554400] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:32.877 [2024-11-28 21:25:56.554601] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:32.877 [2024-11-28 21:25:56.554615] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:32.878 [2024-11-28 21:25:56.557134] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:32.878 [2024-11-28 21:25:56.557185] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:32.878 21:25:56 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.136 [2024-11-28 21:25:56.784894] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.136 21:25:56 -- host/timeout.sh@103 -- # wait 85431 00:18:34.073 [2024-11-28 21:25:57.589838] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
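The run above is the core of this timeout test: while the 10.0.0.2:4420 listener is down, every reconnect attempt fails with connect() errno = 111 and the controller stays in the failed state; once host/timeout.sh re-adds the listener at @102, the next reset completes successfully. A minimal sketch of that outage/recovery pattern, using only rpc.py calls that appear in this log (the actual listener removal happened earlier in the run, and the re-add is the @102 call above), not the real host/timeout.sh:

  # Sketch only: drop the NVMe/TCP listener long enough for the host to abort
  # queued I/O and loop on reconnects, then restore it so the reset succeeds.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # host now sees connect() errno=111
  sleep 3                                                                   # outage window, as in @101 above
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420      # target listens again; reset succeeds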
00:18:39.344 00:18:39.344 Latency(us) 00:18:39.344 [2024-11-28T21:26:03.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.344 [2024-11-28T21:26:03.087Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:39.344 Verification LBA range: start 0x0 length 0x4000 00:18:39.344 NVMe0n1 : 10.01 8554.49 33.42 6057.92 0.00 8741.70 595.78 3019898.88 00:18:39.344 [2024-11-28T21:26:03.087Z] =================================================================================================================== 00:18:39.344 [2024-11-28T21:26:03.087Z] Total : 8554.49 33.42 6057.92 0.00 8741.70 0.00 3019898.88 00:18:39.344 0 00:18:39.344 21:26:02 -- host/timeout.sh@105 -- # killprocess 85303 00:18:39.344 21:26:02 -- common/autotest_common.sh@936 -- # '[' -z 85303 ']' 00:18:39.344 21:26:02 -- common/autotest_common.sh@940 -- # kill -0 85303 00:18:39.344 21:26:02 -- common/autotest_common.sh@941 -- # uname 00:18:39.344 21:26:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:39.344 21:26:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85303 00:18:39.344 killing process with pid 85303 00:18:39.344 Received shutdown signal, test time was about 10.000000 seconds 00:18:39.344 00:18:39.344 Latency(us) 00:18:39.344 [2024-11-28T21:26:03.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.344 [2024-11-28T21:26:03.087Z] =================================================================================================================== 00:18:39.344 [2024-11-28T21:26:03.087Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:39.344 21:26:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:39.344 21:26:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:39.344 21:26:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85303' 00:18:39.344 21:26:02 -- common/autotest_common.sh@955 -- # kill 85303 00:18:39.344 21:26:02 -- common/autotest_common.sh@960 -- # wait 85303 00:18:39.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.344 21:26:02 -- host/timeout.sh@110 -- # bdevperf_pid=85541 00:18:39.344 21:26:02 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:18:39.344 21:26:02 -- host/timeout.sh@112 -- # waitforlisten 85541 /var/tmp/bdevperf.sock 00:18:39.344 21:26:02 -- common/autotest_common.sh@829 -- # '[' -z 85541 ']' 00:18:39.344 21:26:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.344 21:26:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.344 21:26:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.344 21:26:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.344 21:26:02 -- common/autotest_common.sh@10 -- # set +x 00:18:39.344 [2024-11-28 21:26:02.662945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
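Between the two bdevperf runs the harness tears down the first process (pid 85303) with the killprocess helper traced above, then starts a fresh bdevperf (pid 85541) on the same RPC socket. Condensed into a sketch (the real helper in common/autotest_common.sh also branches on uname and on whether the command name is sudo, both visible in the trace; those branches are omitted here):

  # Sketch of the killprocess pattern: check the pid, log what is being killed,
  # then kill and reap it before the next bdevperf is started on the same
  # /var/tmp/bdevperf.sock RPC socket.
  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                          # guard, seen as '[' -z 85303 ']' above
    kill -0 "$pid" || return 1                         # bail out if the process is already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")    # reactor_2 for bdevperf in this run
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                        # reap it (works because the harness started it)
  }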
00:18:39.344 [2024-11-28 21:26:02.663309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85541 ] 00:18:39.344 [2024-11-28 21:26:02.803957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.344 [2024-11-28 21:26:02.839516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.912 21:26:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.912 21:26:03 -- common/autotest_common.sh@862 -- # return 0 00:18:39.912 21:26:03 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 85541 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:39.912 21:26:03 -- host/timeout.sh@116 -- # dtrace_pid=85557 00:18:39.912 21:26:03 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:40.171 21:26:03 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:40.738 NVMe0n1 00:18:40.738 21:26:04 -- host/timeout.sh@124 -- # rpc_pid=85604 00:18:40.738 21:26:04 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:40.738 21:26:04 -- host/timeout.sh@125 -- # sleep 1 00:18:40.738 Running I/O for 10 seconds... 00:18:41.706 21:26:05 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:41.707 [2024-11-28 21:26:05.420655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.420723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.420762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.420772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.420783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.420792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.420802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.420811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.420820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.420829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.420839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.420848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.420873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.421259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.421277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:118008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.421287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.421299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.421310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.421321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.421345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.421371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.421625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.421639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.421649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.421660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.421669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.421679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.421688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.421699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.421708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 
21:26:05.421925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.421947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.421959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.421970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.421980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.421989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.422010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.422021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.422032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.422389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.422416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.422426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.422437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.422447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.422458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.422468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.422479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.422488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.422498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.422721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.422736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:114128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.422746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.422757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.422766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.422777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.422786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.422797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.423113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.423128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.423166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.423178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.423188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.423200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.423220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.423231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.423476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.423491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.423501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.423527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.423536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.423547] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.423556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.423567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.423848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.423876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.423889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.423900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.423910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.423920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.423929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.423940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.423949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.424185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.424209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.424222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.424232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.424243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.424252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.424263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:56040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.424272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.424282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:108696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.424291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.424473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.424572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.424585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.424595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.424605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.424615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.424839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.424862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.424874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.424883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.424895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.424921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.424932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.424942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.424953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.424962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.425124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.425227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.425241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.425251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.425264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.425542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.425562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.425572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.425583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.425592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.425603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.425612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.425623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.425633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.425643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.425885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.425904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.425915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.425925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.425934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.425945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.425955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.425966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 
21:26:05.425975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.425985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.425994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.426298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.426310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.426321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:68312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.426331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.426342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.426352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.426379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.426388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.426661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.426679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.426691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.426700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.426711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.426720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.426731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.707 [2024-11-28 21:26:05.426740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.707 [2024-11-28 21:26:05.426750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.426759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.426770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.426900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.427184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.427207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.427228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.427250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.427271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.427406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.427539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.427560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.427581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.427601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.427743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.427860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.427880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.427901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.427920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.427931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.428145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.428170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.428180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.428191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.428200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.428210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.428220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.428230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.428239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.428249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.428258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.428378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.428388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.428527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.428803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.428826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.429128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.429146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.429157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.429168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:28704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.429178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.429189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.429199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.429210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.429325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.429338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.429348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:41.708 [2024-11-28 21:26:05.429593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.429614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.429627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.429636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.429647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.429657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.429668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.429678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.429688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.429827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.429933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.429945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.429956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.429966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.429977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.429986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.429997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.430261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.430276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.430286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 
21:26:05.430298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.430309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.430320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.430330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.430341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.430567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.430590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.430600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.430612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.430621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.430632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.430877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.430891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.430900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.430912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.430922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.430933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.430942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.430953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.431152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.431168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.708 [2024-11-28 21:26:05.431178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.431376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:41.708 [2024-11-28 21:26:05.431628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:41.708 [2024-11-28 21:26:05.431648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:8 PRP1 0x0 PRP2 0x0 00:18:41.708 [2024-11-28 21:26:05.431659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.431703] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x142d9f0 was disconnected and freed. reset controller. 00:18:41.708 [2024-11-28 21:26:05.432052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.708 [2024-11-28 21:26:05.432088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.432100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.708 [2024-11-28 21:26:05.432109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.432119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.708 [2024-11-28 21:26:05.432129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.432139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.708 [2024-11-28 21:26:05.432163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.708 [2024-11-28 21:26:05.432172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432470 is same with the state(5) to be set 00:18:41.708 [2024-11-28 21:26:05.432721] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:41.708 [2024-11-28 21:26:05.432770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1432470 (9): Bad file descriptor 00:18:41.708 [2024-11-28 21:26:05.433050] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:41.708 [2024-11-28 21:26:05.433134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:41.708 [2024-11-28 21:26:05.433422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:41.708 [2024-11-28 21:26:05.433468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1432470 with addr=10.0.0.2, port=4420 00:18:41.708 [2024-11-28 21:26:05.433480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432470 is same 
with the state(5) to be set 00:18:41.708 [2024-11-28 21:26:05.433502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1432470 (9): Bad file descriptor 00:18:41.708 [2024-11-28 21:26:05.433519] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:41.708 [2024-11-28 21:26:05.433527] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:41.708 [2024-11-28 21:26:05.433538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:41.708 [2024-11-28 21:26:05.433558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:41.708 [2024-11-28 21:26:05.433695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:41.708 21:26:05 -- host/timeout.sh@128 -- # wait 85604 00:18:44.242 [2024-11-28 21:26:07.434120] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:44.242 [2024-11-28 21:26:07.434241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:44.242 [2024-11-28 21:26:07.434292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:44.242 [2024-11-28 21:26:07.434309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1432470 with addr=10.0.0.2, port=4420 00:18:44.243 [2024-11-28 21:26:07.434323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432470 is same with the state(5) to be set 00:18:44.243 [2024-11-28 21:26:07.434350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1432470 (9): Bad file descriptor 00:18:44.243 [2024-11-28 21:26:07.434368] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:44.243 [2024-11-28 21:26:07.434378] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:44.243 [2024-11-28 21:26:07.434388] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:44.243 [2024-11-28 21:26:07.434415] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
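The connect() failures above return errno 111 (ECONNREFUSED): nothing is accepting on 10.0.0.2:4420, so bdev_nvme keeps resetting the controller on roughly two-second intervals (21:26:05, :07, :09, :11). The timeout test ultimately passes by counting the delayed reconnect attempts recorded in trace.txt, as the grep further below shows. A minimal sketch of that check, assuming the same trace file path; the variable name is illustrative and this snippet is not part of the captured output:

  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  # Count how many times bdev_nvme applied a reconnect delay before retrying.
  delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
  # The script requires at least three delayed attempts; two or fewer is a failure.
  if (( delays <= 2 )); then
      echo "expected >= 3 delayed reconnects, saw $delays" >&2
      exit 1
  fi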
00:18:44.243 [2024-11-28 21:26:07.434441] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.147 [2024-11-28 21:26:09.434918] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.147 [2024-11-28 21:26:09.435067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.147 [2024-11-28 21:26:09.435128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.147 [2024-11-28 21:26:09.435173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1432470 with addr=10.0.0.2, port=4420 00:18:46.147 [2024-11-28 21:26:09.435188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432470 is same with the state(5) to be set 00:18:46.147 [2024-11-28 21:26:09.435214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1432470 (9): Bad file descriptor 00:18:46.147 [2024-11-28 21:26:09.435233] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:46.147 [2024-11-28 21:26:09.435243] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:46.147 [2024-11-28 21:26:09.435253] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:46.147 [2024-11-28 21:26:09.435281] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:46.147 [2024-11-28 21:26:09.435293] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:48.051 [2024-11-28 21:26:11.435359] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:48.051 [2024-11-28 21:26:11.435411] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:48.051 [2024-11-28 21:26:11.435436] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:48.051 [2024-11-28 21:26:11.435462] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:48.051 [2024-11-28 21:26:11.435503] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:48.987 00:18:48.987 Latency(us) 00:18:48.987 [2024-11-28T21:26:12.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.988 [2024-11-28T21:26:12.731Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:48.988 NVMe0n1 : 8.14 2247.39 8.78 15.73 0.00 56454.82 7119.59 7046430.72 00:18:48.988 [2024-11-28T21:26:12.731Z] =================================================================================================================== 00:18:48.988 [2024-11-28T21:26:12.731Z] Total : 2247.39 8.78 15.73 0.00 56454.82 7119.59 7046430.72 00:18:48.988 0 00:18:48.988 21:26:12 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:48.988 Attaching 5 probes... 
00:18:48.988 1266.901626: reset bdev controller NVMe0 00:18:48.988 1266.995040: reconnect bdev controller NVMe0 00:18:48.988 3268.137218: reconnect delay bdev controller NVMe0 00:18:48.988 3268.175346: reconnect bdev controller NVMe0 00:18:48.988 5268.968793: reconnect delay bdev controller NVMe0 00:18:48.988 5269.003828: reconnect bdev controller NVMe0 00:18:48.988 7269.506803: reconnect delay bdev controller NVMe0 00:18:48.988 7269.541928: reconnect bdev controller NVMe0 00:18:48.988 21:26:12 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:48.988 21:26:12 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:48.988 21:26:12 -- host/timeout.sh@136 -- # kill 85557 00:18:48.988 21:26:12 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:48.988 21:26:12 -- host/timeout.sh@139 -- # killprocess 85541 00:18:48.988 21:26:12 -- common/autotest_common.sh@936 -- # '[' -z 85541 ']' 00:18:48.988 21:26:12 -- common/autotest_common.sh@940 -- # kill -0 85541 00:18:48.988 21:26:12 -- common/autotest_common.sh@941 -- # uname 00:18:48.988 21:26:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:48.988 21:26:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85541 00:18:48.988 killing process with pid 85541 00:18:48.988 Received shutdown signal, test time was about 8.208760 seconds 00:18:48.988 00:18:48.988 Latency(us) 00:18:48.988 [2024-11-28T21:26:12.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.988 [2024-11-28T21:26:12.731Z] =================================================================================================================== 00:18:48.988 [2024-11-28T21:26:12.731Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:48.988 21:26:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:48.988 21:26:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:48.988 21:26:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85541' 00:18:48.988 21:26:12 -- common/autotest_common.sh@955 -- # kill 85541 00:18:48.988 21:26:12 -- common/autotest_common.sh@960 -- # wait 85541 00:18:48.988 21:26:12 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:49.247 21:26:12 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:49.247 21:26:12 -- host/timeout.sh@145 -- # nvmftestfini 00:18:49.247 21:26:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:49.247 21:26:12 -- nvmf/common.sh@116 -- # sync 00:18:49.247 21:26:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:49.247 21:26:12 -- nvmf/common.sh@119 -- # set +e 00:18:49.247 21:26:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:49.247 21:26:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:49.247 rmmod nvme_tcp 00:18:49.247 rmmod nvme_fabrics 00:18:49.247 rmmod nvme_keyring 00:18:49.506 21:26:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:49.506 21:26:13 -- nvmf/common.sh@123 -- # set -e 00:18:49.506 21:26:13 -- nvmf/common.sh@124 -- # return 0 00:18:49.506 21:26:13 -- nvmf/common.sh@477 -- # '[' -n 85102 ']' 00:18:49.506 21:26:13 -- nvmf/common.sh@478 -- # killprocess 85102 00:18:49.506 21:26:13 -- common/autotest_common.sh@936 -- # '[' -z 85102 ']' 00:18:49.506 21:26:13 -- common/autotest_common.sh@940 -- # kill -0 85102 00:18:49.506 21:26:13 -- common/autotest_common.sh@941 -- # uname 00:18:49.506 21:26:13 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:18:49.506 21:26:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85102 00:18:49.506 21:26:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:49.506 21:26:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:49.506 killing process with pid 85102 00:18:49.506 21:26:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85102' 00:18:49.506 21:26:13 -- common/autotest_common.sh@955 -- # kill 85102 00:18:49.506 21:26:13 -- common/autotest_common.sh@960 -- # wait 85102 00:18:49.506 21:26:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:49.506 21:26:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:49.506 21:26:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:49.506 21:26:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:49.506 21:26:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:49.506 21:26:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.506 21:26:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.506 21:26:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.506 21:26:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:49.506 00:18:49.506 real 0m46.642s 00:18:49.506 user 2m17.255s 00:18:49.506 sys 0m5.187s 00:18:49.506 21:26:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:49.506 21:26:13 -- common/autotest_common.sh@10 -- # set +x 00:18:49.506 ************************************ 00:18:49.506 END TEST nvmf_timeout 00:18:49.506 ************************************ 00:18:49.765 21:26:13 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:18:49.765 21:26:13 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:18:49.765 21:26:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:49.765 21:26:13 -- common/autotest_common.sh@10 -- # set +x 00:18:49.765 21:26:13 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:18:49.765 ************************************ 00:18:49.765 END TEST nvmf_tcp 00:18:49.765 ************************************ 00:18:49.765 00:18:49.765 real 10m22.867s 00:18:49.765 user 28m58.848s 00:18:49.765 sys 3m24.464s 00:18:49.765 21:26:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:49.765 21:26:13 -- common/autotest_common.sh@10 -- # set +x 00:18:49.765 21:26:13 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:18:49.765 21:26:13 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:49.765 21:26:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:49.765 21:26:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:49.765 21:26:13 -- common/autotest_common.sh@10 -- # set +x 00:18:49.765 ************************************ 00:18:49.765 START TEST nvmf_dif 00:18:49.765 ************************************ 00:18:49.765 21:26:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:49.765 * Looking for test storage... 
00:18:49.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:49.765 21:26:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:49.765 21:26:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:49.765 21:26:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:50.023 21:26:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:50.023 21:26:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:50.023 21:26:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:50.023 21:26:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:50.023 21:26:13 -- scripts/common.sh@335 -- # IFS=.-: 00:18:50.023 21:26:13 -- scripts/common.sh@335 -- # read -ra ver1 00:18:50.023 21:26:13 -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.023 21:26:13 -- scripts/common.sh@336 -- # read -ra ver2 00:18:50.023 21:26:13 -- scripts/common.sh@337 -- # local 'op=<' 00:18:50.023 21:26:13 -- scripts/common.sh@339 -- # ver1_l=2 00:18:50.023 21:26:13 -- scripts/common.sh@340 -- # ver2_l=1 00:18:50.023 21:26:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:50.023 21:26:13 -- scripts/common.sh@343 -- # case "$op" in 00:18:50.023 21:26:13 -- scripts/common.sh@344 -- # : 1 00:18:50.023 21:26:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:50.023 21:26:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:50.023 21:26:13 -- scripts/common.sh@364 -- # decimal 1 00:18:50.023 21:26:13 -- scripts/common.sh@352 -- # local d=1 00:18:50.023 21:26:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.023 21:26:13 -- scripts/common.sh@354 -- # echo 1 00:18:50.023 21:26:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:50.023 21:26:13 -- scripts/common.sh@365 -- # decimal 2 00:18:50.023 21:26:13 -- scripts/common.sh@352 -- # local d=2 00:18:50.024 21:26:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:50.024 21:26:13 -- scripts/common.sh@354 -- # echo 2 00:18:50.024 21:26:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:50.024 21:26:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:50.024 21:26:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:50.024 21:26:13 -- scripts/common.sh@367 -- # return 0 00:18:50.024 21:26:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:50.024 21:26:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:50.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.024 --rc genhtml_branch_coverage=1 00:18:50.024 --rc genhtml_function_coverage=1 00:18:50.024 --rc genhtml_legend=1 00:18:50.024 --rc geninfo_all_blocks=1 00:18:50.024 --rc geninfo_unexecuted_blocks=1 00:18:50.024 00:18:50.024 ' 00:18:50.024 21:26:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:50.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.024 --rc genhtml_branch_coverage=1 00:18:50.024 --rc genhtml_function_coverage=1 00:18:50.024 --rc genhtml_legend=1 00:18:50.024 --rc geninfo_all_blocks=1 00:18:50.024 --rc geninfo_unexecuted_blocks=1 00:18:50.024 00:18:50.024 ' 00:18:50.024 21:26:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:50.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.024 --rc genhtml_branch_coverage=1 00:18:50.024 --rc genhtml_function_coverage=1 00:18:50.024 --rc genhtml_legend=1 00:18:50.024 --rc geninfo_all_blocks=1 00:18:50.024 --rc geninfo_unexecuted_blocks=1 00:18:50.024 00:18:50.024 ' 00:18:50.024 
21:26:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:50.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.024 --rc genhtml_branch_coverage=1 00:18:50.024 --rc genhtml_function_coverage=1 00:18:50.024 --rc genhtml_legend=1 00:18:50.024 --rc geninfo_all_blocks=1 00:18:50.024 --rc geninfo_unexecuted_blocks=1 00:18:50.024 00:18:50.024 ' 00:18:50.024 21:26:13 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:50.024 21:26:13 -- nvmf/common.sh@7 -- # uname -s 00:18:50.024 21:26:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.024 21:26:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.024 21:26:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.024 21:26:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.024 21:26:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.024 21:26:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.024 21:26:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.024 21:26:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.024 21:26:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.024 21:26:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.024 21:26:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:18:50.024 21:26:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:18:50.024 21:26:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.024 21:26:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.024 21:26:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:50.024 21:26:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:50.024 21:26:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.024 21:26:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.024 21:26:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.024 21:26:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.024 21:26:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.024 21:26:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.024 21:26:13 -- paths/export.sh@5 -- # export PATH 00:18:50.024 21:26:13 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.024 21:26:13 -- nvmf/common.sh@46 -- # : 0 00:18:50.024 21:26:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:50.024 21:26:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:50.024 21:26:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:50.024 21:26:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.024 21:26:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.024 21:26:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:50.024 21:26:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:50.024 21:26:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:50.024 21:26:13 -- target/dif.sh@15 -- # NULL_META=16 00:18:50.024 21:26:13 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:50.024 21:26:13 -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:50.024 21:26:13 -- target/dif.sh@15 -- # NULL_DIF=1 00:18:50.024 21:26:13 -- target/dif.sh@135 -- # nvmftestinit 00:18:50.024 21:26:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:50.024 21:26:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.024 21:26:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:50.024 21:26:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:50.024 21:26:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:50.024 21:26:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.024 21:26:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:50.024 21:26:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.024 21:26:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:50.024 21:26:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:50.024 21:26:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:50.024 21:26:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:50.024 21:26:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:50.024 21:26:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:50.024 21:26:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.024 21:26:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.024 21:26:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:50.024 21:26:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:50.024 21:26:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:50.024 21:26:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:50.024 21:26:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:50.024 21:26:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.024 21:26:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:50.024 21:26:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:50.024 21:26:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:50.024 21:26:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:50.024 21:26:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:50.024 21:26:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:50.024 Cannot find device "nvmf_tgt_br" 
00:18:50.024 21:26:13 -- nvmf/common.sh@154 -- # true 00:18:50.024 21:26:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:50.024 Cannot find device "nvmf_tgt_br2" 00:18:50.024 21:26:13 -- nvmf/common.sh@155 -- # true 00:18:50.024 21:26:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:50.024 21:26:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:50.024 Cannot find device "nvmf_tgt_br" 00:18:50.024 21:26:13 -- nvmf/common.sh@157 -- # true 00:18:50.024 21:26:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:50.024 Cannot find device "nvmf_tgt_br2" 00:18:50.024 21:26:13 -- nvmf/common.sh@158 -- # true 00:18:50.024 21:26:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:50.024 21:26:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:50.024 21:26:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:50.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:50.024 21:26:13 -- nvmf/common.sh@161 -- # true 00:18:50.024 21:26:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:50.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:50.025 21:26:13 -- nvmf/common.sh@162 -- # true 00:18:50.025 21:26:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:50.025 21:26:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:50.025 21:26:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:50.025 21:26:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:50.025 21:26:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:50.025 21:26:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:50.283 21:26:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:50.283 21:26:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:50.283 21:26:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:50.283 21:26:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:50.283 21:26:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:50.283 21:26:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:50.283 21:26:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:50.283 21:26:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:50.283 21:26:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:50.283 21:26:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:50.283 21:26:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:50.283 21:26:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:50.283 21:26:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:50.283 21:26:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:50.283 21:26:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:50.283 21:26:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:50.283 21:26:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:50.283 21:26:13 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:50.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:18:50.283 00:18:50.283 --- 10.0.0.2 ping statistics --- 00:18:50.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.283 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:50.283 21:26:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:50.283 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:50.283 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:18:50.283 00:18:50.283 --- 10.0.0.3 ping statistics --- 00:18:50.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.283 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:18:50.283 21:26:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:50.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:50.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:50.283 00:18:50.283 --- 10.0.0.1 ping statistics --- 00:18:50.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.284 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:50.284 21:26:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.284 21:26:13 -- nvmf/common.sh@421 -- # return 0 00:18:50.284 21:26:13 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:18:50.284 21:26:13 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:50.542 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:50.542 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:50.542 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:50.801 21:26:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.801 21:26:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:50.801 21:26:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:50.801 21:26:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.801 21:26:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:50.801 21:26:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:50.801 21:26:14 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:50.801 21:26:14 -- target/dif.sh@137 -- # nvmfappstart 00:18:50.801 21:26:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:50.801 21:26:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:50.801 21:26:14 -- common/autotest_common.sh@10 -- # set +x 00:18:50.801 21:26:14 -- nvmf/common.sh@469 -- # nvmfpid=86053 00:18:50.801 21:26:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:50.801 21:26:14 -- nvmf/common.sh@470 -- # waitforlisten 86053 00:18:50.801 21:26:14 -- common/autotest_common.sh@829 -- # '[' -z 86053 ']' 00:18:50.802 21:26:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.802 21:26:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:50.802 21:26:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
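Before the target starts, nvmf_veth_init above wires up an isolated test network: the target runs inside the nvmf_tgt_ns_spdk namespace and owns 10.0.0.2 and 10.0.0.3, the initiator side keeps 10.0.0.1, and both halves of the veth pairs are bridged over nvmf_br, which the three pings above then verify. A condensed sketch of that topology, restating commands already present in the log (the second target interface, the link-up steps, and the iptables rules are omitted here):

  ip netns add nvmf_tgt_ns_spdk
  # One veth pair per side; the *_br ends get enslaved to the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                               # initiator
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if # target
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.2   # initiator -> target reachability, as checked above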
00:18:50.802 21:26:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:50.802 21:26:14 -- common/autotest_common.sh@10 -- # set +x 00:18:50.802 [2024-11-28 21:26:14.388142] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:50.802 [2024-11-28 21:26:14.388513] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.802 [2024-11-28 21:26:14.531253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.061 [2024-11-28 21:26:14.572298] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:51.061 [2024-11-28 21:26:14.572737] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.061 [2024-11-28 21:26:14.572762] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.061 [2024-11-28 21:26:14.572776] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.061 [2024-11-28 21:26:14.572808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.995 21:26:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.995 21:26:15 -- common/autotest_common.sh@862 -- # return 0 00:18:51.995 21:26:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:51.995 21:26:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:51.995 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:18:51.995 21:26:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.995 21:26:15 -- target/dif.sh@139 -- # create_transport 00:18:51.995 21:26:15 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:51.995 21:26:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.995 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:18:51.995 [2024-11-28 21:26:15.432596] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.995 21:26:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.995 21:26:15 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:51.995 21:26:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:51.995 21:26:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:51.995 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:18:51.995 ************************************ 00:18:51.995 START TEST fio_dif_1_default 00:18:51.995 ************************************ 00:18:51.995 21:26:15 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:18:51.995 21:26:15 -- target/dif.sh@86 -- # create_subsystems 0 00:18:51.995 21:26:15 -- target/dif.sh@28 -- # local sub 00:18:51.995 21:26:15 -- target/dif.sh@30 -- # for sub in "$@" 00:18:51.995 21:26:15 -- target/dif.sh@31 -- # create_subsystem 0 00:18:51.995 21:26:15 -- target/dif.sh@18 -- # local sub_id=0 00:18:51.995 21:26:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:51.995 21:26:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.995 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:18:51.995 bdev_null0 00:18:51.995 21:26:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.995 21:26:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:51.995 21:26:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.995 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:18:51.995 21:26:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.995 21:26:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:51.995 21:26:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.995 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:18:51.995 21:26:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.995 21:26:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:51.995 21:26:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.995 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:18:51.995 [2024-11-28 21:26:15.476713] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.995 21:26:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.995 21:26:15 -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:51.995 21:26:15 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:51.996 21:26:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:51.996 21:26:15 -- nvmf/common.sh@520 -- # config=() 00:18:51.996 21:26:15 -- nvmf/common.sh@520 -- # local subsystem config 00:18:51.996 21:26:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:51.996 21:26:15 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:51.996 21:26:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:51.996 { 00:18:51.996 "params": { 00:18:51.996 "name": "Nvme$subsystem", 00:18:51.996 "trtype": "$TEST_TRANSPORT", 00:18:51.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:51.996 "adrfam": "ipv4", 00:18:51.996 "trsvcid": "$NVMF_PORT", 00:18:51.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:51.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:51.996 "hdgst": ${hdgst:-false}, 00:18:51.996 "ddgst": ${ddgst:-false} 00:18:51.996 }, 00:18:51.996 "method": "bdev_nvme_attach_controller" 00:18:51.996 } 00:18:51.996 EOF 00:18:51.996 )") 00:18:51.996 21:26:15 -- target/dif.sh@82 -- # gen_fio_conf 00:18:51.996 21:26:15 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:51.996 21:26:15 -- target/dif.sh@54 -- # local file 00:18:51.996 21:26:15 -- target/dif.sh@56 -- # cat 00:18:51.996 21:26:15 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:51.996 21:26:15 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:51.996 21:26:15 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:51.996 21:26:15 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:51.996 21:26:15 -- common/autotest_common.sh@1330 -- # shift 00:18:51.996 21:26:15 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:51.996 21:26:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:51.996 21:26:15 -- nvmf/common.sh@542 -- # cat 00:18:51.996 21:26:15 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:51.996 21:26:15 -- target/dif.sh@72 -- # (( file <= files )) 00:18:51.996 21:26:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:51.996 
21:26:15 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:51.996 21:26:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:51.996 21:26:15 -- nvmf/common.sh@544 -- # jq . 00:18:51.996 21:26:15 -- nvmf/common.sh@545 -- # IFS=, 00:18:51.996 21:26:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:51.996 "params": { 00:18:51.996 "name": "Nvme0", 00:18:51.996 "trtype": "tcp", 00:18:51.996 "traddr": "10.0.0.2", 00:18:51.996 "adrfam": "ipv4", 00:18:51.996 "trsvcid": "4420", 00:18:51.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:51.996 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:51.996 "hdgst": false, 00:18:51.996 "ddgst": false 00:18:51.996 }, 00:18:51.996 "method": "bdev_nvme_attach_controller" 00:18:51.996 }' 00:18:51.996 21:26:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:51.996 21:26:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:51.996 21:26:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:51.996 21:26:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:51.996 21:26:15 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:51.996 21:26:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:51.996 21:26:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:51.996 21:26:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:51.996 21:26:15 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:51.996 21:26:15 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:51.996 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:51.996 fio-3.35 00:18:51.996 Starting 1 thread 00:18:52.565 [2024-11-28 21:26:16.018364] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:52.565 [2024-11-28 21:26:16.018475] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:02.572 00:19:02.572 filename0: (groupid=0, jobs=1): err= 0: pid=86119: Thu Nov 28 21:26:26 2024 00:19:02.572 read: IOPS=9417, BW=36.8MiB/s (38.6MB/s)(368MiB/10001msec) 00:19:02.572 slat (usec): min=5, max=110, avg= 7.90, stdev= 3.53 00:19:02.572 clat (usec): min=308, max=3680, avg=401.70, stdev=52.22 00:19:02.572 lat (usec): min=314, max=3708, avg=409.61, stdev=52.90 00:19:02.572 clat percentiles (usec): 00:19:02.572 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 363], 00:19:02.572 | 30.00th=[ 371], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 404], 00:19:02.572 | 70.00th=[ 420], 80.00th=[ 441], 90.00th=[ 465], 95.00th=[ 486], 00:19:02.572 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 578], 99.95th=[ 594], 00:19:02.572 | 99.99th=[ 1090] 00:19:02.572 bw ( KiB/s): min=36480, max=39136, per=100.00%, avg=37687.58, stdev=800.63, samples=19 00:19:02.572 iops : min= 9120, max= 9784, avg=9421.89, stdev=200.16, samples=19 00:19:02.572 lat (usec) : 500=96.84%, 750=3.15%, 1000=0.01% 00:19:02.572 lat (msec) : 2=0.01%, 4=0.01% 00:19:02.572 cpu : usr=85.77%, sys=12.48%, ctx=23, majf=0, minf=8 00:19:02.572 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.572 issued rwts: total=94188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.572 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:02.572 00:19:02.573 Run status group 0 (all jobs): 00:19:02.573 READ: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=368MiB (386MB), run=10001-10001msec 00:19:02.573 21:26:26 -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:02.573 21:26:26 -- target/dif.sh@43 -- # local sub 00:19:02.573 21:26:26 -- target/dif.sh@45 -- # for sub in "$@" 00:19:02.573 21:26:26 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:02.573 21:26:26 -- target/dif.sh@36 -- # local sub_id=0 00:19:02.573 21:26:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:02.573 21:26:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.573 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:19:02.573 21:26:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.573 21:26:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:02.573 21:26:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.573 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:19:02.573 ************************************ 00:19:02.573 END TEST fio_dif_1_default 00:19:02.573 ************************************ 00:19:02.573 21:26:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.573 00:19:02.573 real 0m10.861s 00:19:02.573 user 0m9.131s 00:19:02.573 sys 0m1.476s 00:19:02.573 21:26:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:02.573 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:19:02.832 21:26:26 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:02.832 21:26:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:02.832 21:26:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:02.832 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:19:02.832 ************************************ 00:19:02.832 START TEST 
fio_dif_1_multi_subsystems 00:19:02.832 ************************************ 00:19:02.832 21:26:26 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:19:02.832 21:26:26 -- target/dif.sh@92 -- # local files=1 00:19:02.832 21:26:26 -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:02.832 21:26:26 -- target/dif.sh@28 -- # local sub 00:19:02.832 21:26:26 -- target/dif.sh@30 -- # for sub in "$@" 00:19:02.832 21:26:26 -- target/dif.sh@31 -- # create_subsystem 0 00:19:02.832 21:26:26 -- target/dif.sh@18 -- # local sub_id=0 00:19:02.832 21:26:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:02.832 21:26:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.832 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:19:02.832 bdev_null0 00:19:02.832 21:26:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.832 21:26:26 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:02.832 21:26:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.832 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:19:02.832 21:26:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.832 21:26:26 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:02.832 21:26:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.832 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:19:02.832 21:26:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.832 21:26:26 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:02.832 21:26:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.832 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:19:02.832 [2024-11-28 21:26:26.396243] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.832 21:26:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.832 21:26:26 -- target/dif.sh@30 -- # for sub in "$@" 00:19:02.832 21:26:26 -- target/dif.sh@31 -- # create_subsystem 1 00:19:02.832 21:26:26 -- target/dif.sh@18 -- # local sub_id=1 00:19:02.832 21:26:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:02.832 21:26:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.832 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:19:02.832 bdev_null1 00:19:02.832 21:26:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.832 21:26:26 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:02.832 21:26:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.832 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:19:02.832 21:26:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.832 21:26:26 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:02.832 21:26:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.832 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:19:02.832 21:26:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.832 21:26:26 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.832 21:26:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.832 21:26:26 -- 
common/autotest_common.sh@10 -- # set +x 00:19:02.832 21:26:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.832 21:26:26 -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:02.832 21:26:26 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:02.832 21:26:26 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:02.832 21:26:26 -- nvmf/common.sh@520 -- # config=() 00:19:02.832 21:26:26 -- nvmf/common.sh@520 -- # local subsystem config 00:19:02.832 21:26:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:02.832 21:26:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:02.832 { 00:19:02.832 "params": { 00:19:02.832 "name": "Nvme$subsystem", 00:19:02.832 "trtype": "$TEST_TRANSPORT", 00:19:02.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:02.832 "adrfam": "ipv4", 00:19:02.832 "trsvcid": "$NVMF_PORT", 00:19:02.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:02.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:02.832 "hdgst": ${hdgst:-false}, 00:19:02.832 "ddgst": ${ddgst:-false} 00:19:02.832 }, 00:19:02.832 "method": "bdev_nvme_attach_controller" 00:19:02.832 } 00:19:02.832 EOF 00:19:02.832 )") 00:19:02.832 21:26:26 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:02.832 21:26:26 -- target/dif.sh@82 -- # gen_fio_conf 00:19:02.832 21:26:26 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:02.832 21:26:26 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:02.832 21:26:26 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:02.832 21:26:26 -- nvmf/common.sh@542 -- # cat 00:19:02.832 21:26:26 -- target/dif.sh@54 -- # local file 00:19:02.832 21:26:26 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:02.832 21:26:26 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:02.832 21:26:26 -- common/autotest_common.sh@1330 -- # shift 00:19:02.832 21:26:26 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:02.832 21:26:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:02.832 21:26:26 -- target/dif.sh@56 -- # cat 00:19:02.832 21:26:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:02.832 21:26:26 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:02.832 21:26:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:02.833 21:26:26 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:02.833 21:26:26 -- target/dif.sh@72 -- # (( file <= files )) 00:19:02.833 21:26:26 -- target/dif.sh@73 -- # cat 00:19:02.833 21:26:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:02.833 21:26:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:02.833 { 00:19:02.833 "params": { 00:19:02.833 "name": "Nvme$subsystem", 00:19:02.833 "trtype": "$TEST_TRANSPORT", 00:19:02.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:02.833 "adrfam": "ipv4", 00:19:02.833 "trsvcid": "$NVMF_PORT", 00:19:02.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:02.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:02.833 "hdgst": ${hdgst:-false}, 00:19:02.833 "ddgst": ${ddgst:-false} 00:19:02.833 }, 00:19:02.833 "method": "bdev_nvme_attach_controller" 00:19:02.833 } 00:19:02.833 EOF 00:19:02.833 )") 00:19:02.833 21:26:26 -- nvmf/common.sh@542 -- # cat 00:19:02.833 21:26:26 -- target/dif.sh@72 
-- # (( file++ )) 00:19:02.833 21:26:26 -- target/dif.sh@72 -- # (( file <= files )) 00:19:02.833 21:26:26 -- nvmf/common.sh@544 -- # jq . 00:19:02.833 21:26:26 -- nvmf/common.sh@545 -- # IFS=, 00:19:02.833 21:26:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:02.833 "params": { 00:19:02.833 "name": "Nvme0", 00:19:02.833 "trtype": "tcp", 00:19:02.833 "traddr": "10.0.0.2", 00:19:02.833 "adrfam": "ipv4", 00:19:02.833 "trsvcid": "4420", 00:19:02.833 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:02.833 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:02.833 "hdgst": false, 00:19:02.833 "ddgst": false 00:19:02.833 }, 00:19:02.833 "method": "bdev_nvme_attach_controller" 00:19:02.833 },{ 00:19:02.833 "params": { 00:19:02.833 "name": "Nvme1", 00:19:02.833 "trtype": "tcp", 00:19:02.833 "traddr": "10.0.0.2", 00:19:02.833 "adrfam": "ipv4", 00:19:02.833 "trsvcid": "4420", 00:19:02.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.833 "hdgst": false, 00:19:02.833 "ddgst": false 00:19:02.833 }, 00:19:02.833 "method": "bdev_nvme_attach_controller" 00:19:02.833 }' 00:19:02.833 21:26:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:02.833 21:26:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:02.833 21:26:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:02.833 21:26:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:02.833 21:26:26 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:02.833 21:26:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:02.833 21:26:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:02.833 21:26:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:02.833 21:26:26 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:02.833 21:26:26 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:03.091 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:03.091 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:03.091 fio-3.35 00:19:03.091 Starting 2 threads 00:19:03.350 [2024-11-28 21:26:27.036909] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:19:03.350 [2024-11-28 21:26:27.036988] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:15.560 00:19:15.560 filename0: (groupid=0, jobs=1): err= 0: pid=86279: Thu Nov 28 21:26:37 2024 00:19:15.560 read: IOPS=5037, BW=19.7MiB/s (20.6MB/s)(197MiB/10001msec) 00:19:15.560 slat (usec): min=6, max=107, avg=13.14, stdev= 4.90 00:19:15.560 clat (usec): min=421, max=2347, avg=757.88, stdev=56.50 00:19:15.560 lat (usec): min=428, max=2373, avg=771.01, stdev=57.13 00:19:15.560 clat percentiles (usec): 00:19:15.560 | 1.00th=[ 652], 5.00th=[ 676], 10.00th=[ 693], 20.00th=[ 709], 00:19:15.560 | 30.00th=[ 725], 40.00th=[ 742], 50.00th=[ 750], 60.00th=[ 766], 00:19:15.560 | 70.00th=[ 783], 80.00th=[ 807], 90.00th=[ 832], 95.00th=[ 857], 00:19:15.560 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 930], 99.95th=[ 947], 00:19:15.560 | 99.99th=[ 1004] 00:19:15.560 bw ( KiB/s): min=19424, max=20800, per=50.07%, avg=20177.21, stdev=358.19, samples=19 00:19:15.560 iops : min= 4856, max= 5200, avg=5044.26, stdev=89.59, samples=19 00:19:15.560 lat (usec) : 500=0.01%, 750=48.85%, 1000=51.13% 00:19:15.560 lat (msec) : 2=0.01%, 4=0.01% 00:19:15.560 cpu : usr=90.58%, sys=8.07%, ctx=50, majf=0, minf=0 00:19:15.560 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.560 issued rwts: total=50380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.560 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:15.560 filename1: (groupid=0, jobs=1): err= 0: pid=86280: Thu Nov 28 21:26:37 2024 00:19:15.560 read: IOPS=5037, BW=19.7MiB/s (20.6MB/s)(197MiB/10000msec) 00:19:15.560 slat (nsec): min=6373, max=61389, avg=12998.76, stdev=4780.61 00:19:15.560 clat (usec): min=581, max=2338, avg=759.36, stdev=61.39 00:19:15.560 lat (usec): min=588, max=2364, avg=772.35, stdev=62.27 00:19:15.560 clat percentiles (usec): 00:19:15.560 | 1.00th=[ 635], 5.00th=[ 668], 10.00th=[ 685], 20.00th=[ 709], 00:19:15.560 | 30.00th=[ 725], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 775], 00:19:15.560 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 865], 00:19:15.560 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 938], 99.95th=[ 955], 00:19:15.560 | 99.99th=[ 1012] 00:19:15.560 bw ( KiB/s): min=19424, max=20800, per=50.07%, avg=20177.21, stdev=362.30, samples=19 00:19:15.560 iops : min= 4856, max= 5200, avg=5044.26, stdev=90.62, samples=19 00:19:15.560 lat (usec) : 750=46.62%, 1000=53.36% 00:19:15.560 lat (msec) : 2=0.01%, 4=0.01% 00:19:15.560 cpu : usr=90.60%, sys=8.12%, ctx=17, majf=0, minf=0 00:19:15.560 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.560 issued rwts: total=50376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.560 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:15.560 00:19:15.560 Run status group 0 (all jobs): 00:19:15.560 READ: bw=39.4MiB/s (41.3MB/s), 19.7MiB/s-19.7MiB/s (20.6MB/s-20.6MB/s), io=394MiB (413MB), run=10000-10001msec 00:19:15.560 21:26:37 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:15.560 21:26:37 -- target/dif.sh@43 -- # local sub 00:19:15.560 21:26:37 -- target/dif.sh@45 -- # for sub in "$@" 00:19:15.560 21:26:37 -- target/dif.sh@46 
-- # destroy_subsystem 0 00:19:15.560 21:26:37 -- target/dif.sh@36 -- # local sub_id=0 00:19:15.560 21:26:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:15.560 21:26:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.560 21:26:37 -- common/autotest_common.sh@10 -- # set +x 00:19:15.560 21:26:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.560 21:26:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:15.560 21:26:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.560 21:26:37 -- common/autotest_common.sh@10 -- # set +x 00:19:15.560 21:26:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.560 21:26:37 -- target/dif.sh@45 -- # for sub in "$@" 00:19:15.560 21:26:37 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:15.560 21:26:37 -- target/dif.sh@36 -- # local sub_id=1 00:19:15.560 21:26:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.560 21:26:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.560 21:26:37 -- common/autotest_common.sh@10 -- # set +x 00:19:15.560 21:26:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.560 21:26:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:15.560 21:26:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.560 21:26:37 -- common/autotest_common.sh@10 -- # set +x 00:19:15.560 ************************************ 00:19:15.560 END TEST fio_dif_1_multi_subsystems 00:19:15.560 ************************************ 00:19:15.560 21:26:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.560 00:19:15.560 real 0m10.984s 00:19:15.560 user 0m18.779s 00:19:15.560 sys 0m1.862s 00:19:15.560 21:26:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:15.560 21:26:37 -- common/autotest_common.sh@10 -- # set +x 00:19:15.560 21:26:37 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:15.560 21:26:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:15.560 21:26:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:15.560 21:26:37 -- common/autotest_common.sh@10 -- # set +x 00:19:15.560 ************************************ 00:19:15.560 START TEST fio_dif_rand_params 00:19:15.560 ************************************ 00:19:15.560 21:26:37 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:19:15.560 21:26:37 -- target/dif.sh@100 -- # local NULL_DIF 00:19:15.560 21:26:37 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:15.560 21:26:37 -- target/dif.sh@103 -- # NULL_DIF=3 00:19:15.560 21:26:37 -- target/dif.sh@103 -- # bs=128k 00:19:15.560 21:26:37 -- target/dif.sh@103 -- # numjobs=3 00:19:15.560 21:26:37 -- target/dif.sh@103 -- # iodepth=3 00:19:15.560 21:26:37 -- target/dif.sh@103 -- # runtime=5 00:19:15.560 21:26:37 -- target/dif.sh@105 -- # create_subsystems 0 00:19:15.560 21:26:37 -- target/dif.sh@28 -- # local sub 00:19:15.560 21:26:37 -- target/dif.sh@30 -- # for sub in "$@" 00:19:15.560 21:26:37 -- target/dif.sh@31 -- # create_subsystem 0 00:19:15.560 21:26:37 -- target/dif.sh@18 -- # local sub_id=0 00:19:15.560 21:26:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:15.560 21:26:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.560 21:26:37 -- common/autotest_common.sh@10 -- # set +x 00:19:15.560 bdev_null0 00:19:15.561 21:26:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
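For reference, the per-test target setup that the xtrace repeats for each DIF case (seen earlier for cnode0/cnode1 and again here for the random-parameters case) reduces to the following minimal sketch. It assumes a running SPDK nvmf target on the default /var/tmp/spdk.sock RPC socket and uses the standard scripts/rpc.py client in place of the autotest rpc_cmd wrapper; the address, port, serial number, and null-bdev geometry are simply the values appearing in this run.

    # Minimal sketch of the setup sequence traced in this log (rpc.py stands in for rpc_cmd).
    rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420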
00:19:15.561 21:26:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:15.561 21:26:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.561 21:26:37 -- common/autotest_common.sh@10 -- # set +x 00:19:15.561 21:26:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.561 21:26:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:15.561 21:26:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.561 21:26:37 -- common/autotest_common.sh@10 -- # set +x 00:19:15.561 21:26:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.561 21:26:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:15.561 21:26:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.561 21:26:37 -- common/autotest_common.sh@10 -- # set +x 00:19:15.561 [2024-11-28 21:26:37.431252] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.561 21:26:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.561 21:26:37 -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:15.561 21:26:37 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:15.561 21:26:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:15.561 21:26:37 -- nvmf/common.sh@520 -- # config=() 00:19:15.561 21:26:37 -- nvmf/common.sh@520 -- # local subsystem config 00:19:15.561 21:26:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:15.561 21:26:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:15.561 21:26:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:15.561 { 00:19:15.561 "params": { 00:19:15.561 "name": "Nvme$subsystem", 00:19:15.561 "trtype": "$TEST_TRANSPORT", 00:19:15.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.561 "adrfam": "ipv4", 00:19:15.561 "trsvcid": "$NVMF_PORT", 00:19:15.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.561 "hdgst": ${hdgst:-false}, 00:19:15.561 "ddgst": ${ddgst:-false} 00:19:15.561 }, 00:19:15.561 "method": "bdev_nvme_attach_controller" 00:19:15.561 } 00:19:15.561 EOF 00:19:15.561 )") 00:19:15.561 21:26:37 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:15.561 21:26:37 -- target/dif.sh@82 -- # gen_fio_conf 00:19:15.561 21:26:37 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:15.561 21:26:37 -- target/dif.sh@54 -- # local file 00:19:15.561 21:26:37 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:15.561 21:26:37 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:15.561 21:26:37 -- target/dif.sh@56 -- # cat 00:19:15.561 21:26:37 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:15.561 21:26:37 -- common/autotest_common.sh@1330 -- # shift 00:19:15.561 21:26:37 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:15.561 21:26:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:15.561 21:26:37 -- nvmf/common.sh@542 -- # cat 00:19:15.561 21:26:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:15.561 21:26:37 -- target/dif.sh@72 -- # (( file = 1 )) 
00:19:15.561 21:26:37 -- target/dif.sh@72 -- # (( file <= files )) 00:19:15.561 21:26:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:15.561 21:26:37 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:15.561 21:26:37 -- nvmf/common.sh@544 -- # jq . 00:19:15.561 21:26:37 -- nvmf/common.sh@545 -- # IFS=, 00:19:15.561 21:26:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:15.561 "params": { 00:19:15.561 "name": "Nvme0", 00:19:15.561 "trtype": "tcp", 00:19:15.561 "traddr": "10.0.0.2", 00:19:15.561 "adrfam": "ipv4", 00:19:15.561 "trsvcid": "4420", 00:19:15.561 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:15.561 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:15.561 "hdgst": false, 00:19:15.561 "ddgst": false 00:19:15.561 }, 00:19:15.561 "method": "bdev_nvme_attach_controller" 00:19:15.561 }' 00:19:15.561 21:26:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:15.561 21:26:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:15.561 21:26:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:15.561 21:26:37 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:15.561 21:26:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:15.561 21:26:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:15.561 21:26:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:15.561 21:26:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:15.561 21:26:37 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:15.561 21:26:37 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:15.561 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:15.561 ... 00:19:15.561 fio-3.35 00:19:15.561 Starting 3 threads 00:19:15.561 [2024-11-28 21:26:37.963541] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
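The fio invocation traced just above follows the usual SPDK fio-plugin pattern: the spdk_bdev plugin is preloaded, and the bdev JSON configuration plus the generated job file are passed as the two inputs (here via /dev/fd process substitutions). A hedged sketch with ordinary files in place of the descriptors, where bdev.json holds the printed bdev_nvme_attach_controller config and job.fio the generated job file (both names are placeholders), looks like:

    # Sketch only: binary and plugin paths are the ones from this run and will differ elsewhere.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf bdev.json job.fio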
00:19:15.561 [2024-11-28 21:26:37.963618] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:19.749 00:19:19.749 filename0: (groupid=0, jobs=1): err= 0: pid=86436: Thu Nov 28 21:26:43 2024 00:19:19.749 read: IOPS=271, BW=33.9MiB/s (35.5MB/s)(170MiB/5001msec) 00:19:19.749 slat (nsec): min=6597, max=58357, avg=14330.04, stdev=5851.64 00:19:19.749 clat (usec): min=8160, max=12678, avg=11031.51, stdev=520.54 00:19:19.749 lat (usec): min=8168, max=12705, avg=11045.84, stdev=521.30 00:19:19.749 clat percentiles (usec): 00:19:19.749 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10421], 20.00th=[10552], 00:19:19.749 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:19:19.749 | 70.00th=[11207], 80.00th=[11600], 90.00th=[11863], 95.00th=[11994], 00:19:19.749 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12649], 99.95th=[12649], 00:19:19.749 | 99.99th=[12649] 00:19:19.749 bw ( KiB/s): min=33024, max=36864, per=33.29%, avg=34645.33, stdev=1047.73, samples=9 00:19:19.749 iops : min= 258, max= 288, avg=270.67, stdev= 8.19, samples=9 00:19:19.749 lat (msec) : 10=0.22%, 20=99.78% 00:19:19.749 cpu : usr=91.64%, sys=7.78%, ctx=8, majf=0, minf=0 00:19:19.749 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.749 issued rwts: total=1356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.749 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:19.749 filename0: (groupid=0, jobs=1): err= 0: pid=86437: Thu Nov 28 21:26:43 2024 00:19:19.749 read: IOPS=271, BW=33.9MiB/s (35.6MB/s)(170MiB/5007msec) 00:19:19.749 slat (nsec): min=6942, max=61827, avg=16294.24, stdev=5985.91 00:19:19.749 clat (usec): min=8163, max=12652, avg=11015.71, stdev=529.98 00:19:19.749 lat (usec): min=8177, max=12696, avg=11032.00, stdev=530.73 00:19:19.749 clat percentiles (usec): 00:19:19.749 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10421], 20.00th=[10552], 00:19:19.749 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:19:19.749 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[11994], 00:19:19.749 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12649], 99.95th=[12649], 00:19:19.749 | 99.99th=[12649] 00:19:19.749 bw ( KiB/s): min=33090, max=36864, per=33.36%, avg=34720.20, stdev=999.01, samples=10 00:19:19.749 iops : min= 258, max= 288, avg=271.20, stdev= 7.90, samples=10 00:19:19.749 lat (msec) : 10=0.44%, 20=99.56% 00:19:19.749 cpu : usr=91.19%, sys=8.17%, ctx=47, majf=0, minf=9 00:19:19.749 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.749 issued rwts: total=1359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.749 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:19.749 filename0: (groupid=0, jobs=1): err= 0: pid=86438: Thu Nov 28 21:26:43 2024 00:19:19.749 read: IOPS=270, BW=33.9MiB/s (35.5MB/s)(170MiB/5006msec) 00:19:19.749 slat (nsec): min=6853, max=58353, avg=15966.66, stdev=5739.23 00:19:19.749 clat (usec): min=8148, max=18276, avg=11037.92, stdev=623.79 00:19:19.749 lat (usec): min=8162, max=18311, avg=11053.89, stdev=624.47 00:19:19.749 clat percentiles (usec): 00:19:19.749 | 1.00th=[10159], 5.00th=[10290], 
10.00th=[10421], 20.00th=[10552], 00:19:19.749 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:19:19.749 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[11994], 00:19:19.749 | 99.00th=[12125], 99.50th=[12256], 99.90th=[18220], 99.95th=[18220], 00:19:19.749 | 99.99th=[18220] 00:19:19.749 bw ( KiB/s): min=33024, max=36864, per=33.28%, avg=34636.80, stdev=1112.94, samples=10 00:19:19.749 iops : min= 258, max= 288, avg=270.60, stdev= 8.69, samples=10 00:19:19.749 lat (msec) : 10=0.44%, 20=99.56% 00:19:19.749 cpu : usr=91.33%, sys=8.11%, ctx=6, majf=0, minf=9 00:19:19.749 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.749 issued rwts: total=1356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.749 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:19.749 00:19:19.749 Run status group 0 (all jobs): 00:19:19.749 READ: bw=102MiB/s (107MB/s), 33.9MiB/s-33.9MiB/s (35.5MB/s-35.6MB/s), io=509MiB (534MB), run=5001-5007msec 00:19:19.749 21:26:43 -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:19.749 21:26:43 -- target/dif.sh@43 -- # local sub 00:19:19.749 21:26:43 -- target/dif.sh@45 -- # for sub in "$@" 00:19:19.749 21:26:43 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:19.749 21:26:43 -- target/dif.sh@36 -- # local sub_id=0 00:19:19.749 21:26:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:19.749 21:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.750 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.750 21:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.750 21:26:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:19.750 21:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.750 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.750 21:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.750 21:26:43 -- target/dif.sh@109 -- # NULL_DIF=2 00:19:19.750 21:26:43 -- target/dif.sh@109 -- # bs=4k 00:19:19.750 21:26:43 -- target/dif.sh@109 -- # numjobs=8 00:19:19.750 21:26:43 -- target/dif.sh@109 -- # iodepth=16 00:19:19.750 21:26:43 -- target/dif.sh@109 -- # runtime= 00:19:19.750 21:26:43 -- target/dif.sh@109 -- # files=2 00:19:19.750 21:26:43 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:19.750 21:26:43 -- target/dif.sh@28 -- # local sub 00:19:19.750 21:26:43 -- target/dif.sh@30 -- # for sub in "$@" 00:19:19.750 21:26:43 -- target/dif.sh@31 -- # create_subsystem 0 00:19:19.750 21:26:43 -- target/dif.sh@18 -- # local sub_id=0 00:19:19.750 21:26:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:19.750 21:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.750 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.750 bdev_null0 00:19:19.750 21:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.750 21:26:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:19.750 21:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.750 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.750 21:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.750 21:26:43 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:19.750 21:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.750 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.750 21:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.750 21:26:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:19.750 21:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.750 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.750 [2024-11-28 21:26:43.285329] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.750 21:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.750 21:26:43 -- target/dif.sh@30 -- # for sub in "$@" 00:19:19.750 21:26:43 -- target/dif.sh@31 -- # create_subsystem 1 00:19:19.750 21:26:43 -- target/dif.sh@18 -- # local sub_id=1 00:19:19.750 21:26:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:19.750 21:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.750 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.750 bdev_null1 00:19:19.750 21:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.750 21:26:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:19.750 21:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.750 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.750 21:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.750 21:26:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:19.750 21:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.750 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.750 21:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.750 21:26:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:19.750 21:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.750 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.750 21:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.750 21:26:43 -- target/dif.sh@30 -- # for sub in "$@" 00:19:19.750 21:26:43 -- target/dif.sh@31 -- # create_subsystem 2 00:19:19.750 21:26:43 -- target/dif.sh@18 -- # local sub_id=2 00:19:19.750 21:26:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:19.750 21:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.750 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.750 bdev_null2 00:19:19.750 21:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.750 21:26:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:19.750 21:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.750 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.750 21:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.750 21:26:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:19.750 21:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.750 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.750 21:26:43 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.750 21:26:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:19.750 21:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.750 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.750 21:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.750 21:26:43 -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:19.750 21:26:43 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:19.750 21:26:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:19.750 21:26:43 -- nvmf/common.sh@520 -- # config=() 00:19:19.750 21:26:43 -- nvmf/common.sh@520 -- # local subsystem config 00:19:19.750 21:26:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:19.750 21:26:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:19.750 { 00:19:19.750 "params": { 00:19:19.750 "name": "Nvme$subsystem", 00:19:19.750 "trtype": "$TEST_TRANSPORT", 00:19:19.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.750 "adrfam": "ipv4", 00:19:19.750 "trsvcid": "$NVMF_PORT", 00:19:19.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.750 "hdgst": ${hdgst:-false}, 00:19:19.750 "ddgst": ${ddgst:-false} 00:19:19.750 }, 00:19:19.750 "method": "bdev_nvme_attach_controller" 00:19:19.750 } 00:19:19.750 EOF 00:19:19.750 )") 00:19:19.750 21:26:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:19.750 21:26:43 -- target/dif.sh@82 -- # gen_fio_conf 00:19:19.750 21:26:43 -- target/dif.sh@54 -- # local file 00:19:19.750 21:26:43 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:19.750 21:26:43 -- target/dif.sh@56 -- # cat 00:19:19.750 21:26:43 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:19.750 21:26:43 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:19.750 21:26:43 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:19.750 21:26:43 -- nvmf/common.sh@542 -- # cat 00:19:19.750 21:26:43 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:19.750 21:26:43 -- common/autotest_common.sh@1330 -- # shift 00:19:19.750 21:26:43 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:19.750 21:26:43 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:19.750 21:26:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:19.750 21:26:43 -- target/dif.sh@72 -- # (( file <= files )) 00:19:19.750 21:26:43 -- target/dif.sh@73 -- # cat 00:19:19.750 21:26:43 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:19.750 21:26:43 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:19.750 21:26:43 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:19.750 21:26:43 -- target/dif.sh@72 -- # (( file++ )) 00:19:19.750 21:26:43 -- target/dif.sh@72 -- # (( file <= files )) 00:19:19.750 21:26:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:19.750 21:26:43 -- target/dif.sh@73 -- # cat 00:19:19.750 21:26:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:19.750 { 00:19:19.750 "params": { 00:19:19.750 "name": "Nvme$subsystem", 00:19:19.750 "trtype": "$TEST_TRANSPORT", 00:19:19.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.750 
"adrfam": "ipv4", 00:19:19.750 "trsvcid": "$NVMF_PORT", 00:19:19.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.750 "hdgst": ${hdgst:-false}, 00:19:19.750 "ddgst": ${ddgst:-false} 00:19:19.750 }, 00:19:19.750 "method": "bdev_nvme_attach_controller" 00:19:19.750 } 00:19:19.750 EOF 00:19:19.750 )") 00:19:19.750 21:26:43 -- nvmf/common.sh@542 -- # cat 00:19:19.750 21:26:43 -- target/dif.sh@72 -- # (( file++ )) 00:19:19.750 21:26:43 -- target/dif.sh@72 -- # (( file <= files )) 00:19:19.750 21:26:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:19.750 21:26:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:19.750 { 00:19:19.750 "params": { 00:19:19.750 "name": "Nvme$subsystem", 00:19:19.750 "trtype": "$TEST_TRANSPORT", 00:19:19.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.750 "adrfam": "ipv4", 00:19:19.750 "trsvcid": "$NVMF_PORT", 00:19:19.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.750 "hdgst": ${hdgst:-false}, 00:19:19.750 "ddgst": ${ddgst:-false} 00:19:19.750 }, 00:19:19.750 "method": "bdev_nvme_attach_controller" 00:19:19.750 } 00:19:19.750 EOF 00:19:19.750 )") 00:19:19.750 21:26:43 -- nvmf/common.sh@542 -- # cat 00:19:19.750 21:26:43 -- nvmf/common.sh@544 -- # jq . 00:19:19.750 21:26:43 -- nvmf/common.sh@545 -- # IFS=, 00:19:19.750 21:26:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:19.750 "params": { 00:19:19.750 "name": "Nvme0", 00:19:19.750 "trtype": "tcp", 00:19:19.750 "traddr": "10.0.0.2", 00:19:19.750 "adrfam": "ipv4", 00:19:19.750 "trsvcid": "4420", 00:19:19.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:19.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:19.750 "hdgst": false, 00:19:19.750 "ddgst": false 00:19:19.750 }, 00:19:19.750 "method": "bdev_nvme_attach_controller" 00:19:19.750 },{ 00:19:19.750 "params": { 00:19:19.750 "name": "Nvme1", 00:19:19.750 "trtype": "tcp", 00:19:19.751 "traddr": "10.0.0.2", 00:19:19.751 "adrfam": "ipv4", 00:19:19.751 "trsvcid": "4420", 00:19:19.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:19.751 "hdgst": false, 00:19:19.751 "ddgst": false 00:19:19.751 }, 00:19:19.751 "method": "bdev_nvme_attach_controller" 00:19:19.751 },{ 00:19:19.751 "params": { 00:19:19.751 "name": "Nvme2", 00:19:19.751 "trtype": "tcp", 00:19:19.751 "traddr": "10.0.0.2", 00:19:19.751 "adrfam": "ipv4", 00:19:19.751 "trsvcid": "4420", 00:19:19.751 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:19.751 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:19.751 "hdgst": false, 00:19:19.751 "ddgst": false 00:19:19.751 }, 00:19:19.751 "method": "bdev_nvme_attach_controller" 00:19:19.751 }' 00:19:19.751 21:26:43 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:19.751 21:26:43 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:19.751 21:26:43 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:19.751 21:26:43 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:19.751 21:26:43 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:19.751 21:26:43 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:19.751 21:26:43 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:19.751 21:26:43 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:19.751 21:26:43 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:19.751 21:26:43 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:20.009 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:20.009 ... 00:19:20.009 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:20.009 ... 00:19:20.009 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:20.009 ... 00:19:20.009 fio-3.35 00:19:20.009 Starting 24 threads 00:19:20.574 [2024-11-28 21:26:44.033237] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:20.574 [2024-11-28 21:26:44.033979] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:30.550 00:19:30.550 filename0: (groupid=0, jobs=1): err= 0: pid=86537: Thu Nov 28 21:26:54 2024 00:19:30.550 read: IOPS=193, BW=774KiB/s (793kB/s)(7756KiB/10018msec) 00:19:30.550 slat (usec): min=4, max=8025, avg=23.94, stdev=257.21 00:19:30.550 clat (msec): min=35, max=144, avg=82.51, stdev=21.89 00:19:30.550 lat (msec): min=35, max=144, avg=82.53, stdev=21.88 00:19:30.550 clat percentiles (msec): 00:19:30.550 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 62], 00:19:30.550 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 93], 00:19:30.550 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 110], 95.00th=[ 120], 00:19:30.550 | 99.00th=[ 121], 99.50th=[ 134], 99.90th=[ 144], 99.95th=[ 144], 00:19:30.550 | 99.99th=[ 144] 00:19:30.550 bw ( KiB/s): min= 640, max= 1080, per=4.19%, avg=771.60, stdev=131.08, samples=20 00:19:30.550 iops : min= 160, max= 270, avg=192.90, stdev=32.77, samples=20 00:19:30.550 lat (msec) : 50=10.06%, 100=65.70%, 250=24.24% 00:19:30.550 cpu : usr=31.37%, sys=1.68%, ctx=889, majf=0, minf=9 00:19:30.550 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=79.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:19:30.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.550 complete : 0=0.0%, 4=87.9%, 8=11.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.550 issued rwts: total=1939,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.551 filename0: (groupid=0, jobs=1): err= 0: pid=86538: Thu Nov 28 21:26:54 2024 00:19:30.551 read: IOPS=192, BW=768KiB/s (786kB/s)(7720KiB/10052msec) 00:19:30.551 slat (usec): min=3, max=8029, avg=23.06, stdev=257.84 00:19:30.551 clat (msec): min=20, max=147, avg=83.16, stdev=22.63 00:19:30.551 lat (msec): min=20, max=147, avg=83.19, stdev=22.63 00:19:30.551 clat percentiles (msec): 00:19:30.551 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 63], 00:19:30.551 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 93], 00:19:30.551 | 70.00th=[ 100], 80.00th=[ 107], 90.00th=[ 112], 95.00th=[ 117], 00:19:30.551 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 142], 99.95th=[ 148], 00:19:30.551 | 99.99th=[ 148] 00:19:30.551 bw ( KiB/s): min= 624, max= 1040, per=4.16%, avg=765.15, stdev=133.97, samples=20 00:19:30.551 iops : min= 156, max= 260, avg=191.25, stdev=33.41, samples=20 00:19:30.551 lat (msec) : 50=9.59%, 100=61.87%, 250=28.55% 00:19:30.551 cpu : usr=42.29%, sys=2.34%, ctx=1332, majf=0, minf=9 00:19:30.551 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:30.551 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.551 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.551 issued rwts: total=1930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.551 filename0: (groupid=0, jobs=1): err= 0: pid=86539: Thu Nov 28 21:26:54 2024 00:19:30.551 read: IOPS=189, BW=758KiB/s (776kB/s)(7620KiB/10050msec) 00:19:30.551 slat (usec): min=3, max=8021, avg=19.14, stdev=205.25 00:19:30.551 clat (msec): min=2, max=159, avg=84.27, stdev=27.43 00:19:30.551 lat (msec): min=2, max=159, avg=84.29, stdev=27.43 00:19:30.551 clat percentiles (msec): 00:19:30.551 | 1.00th=[ 7], 5.00th=[ 45], 10.00th=[ 50], 20.00th=[ 63], 00:19:30.551 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 85], 60.00th=[ 96], 00:19:30.551 | 70.00th=[ 105], 80.00th=[ 110], 90.00th=[ 116], 95.00th=[ 121], 00:19:30.551 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 161], 00:19:30.551 | 99.99th=[ 161] 00:19:30.551 bw ( KiB/s): min= 512, max= 1396, per=4.10%, avg=755.00, stdev=205.63, samples=20 00:19:30.551 iops : min= 128, max= 349, avg=188.75, stdev=51.41, samples=20 00:19:30.551 lat (msec) : 4=0.84%, 10=0.84%, 20=1.57%, 50=7.09%, 100=55.28% 00:19:30.551 lat (msec) : 250=34.38% 00:19:30.551 cpu : usr=42.69%, sys=2.45%, ctx=1526, majf=0, minf=0 00:19:30.551 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=77.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:30.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.551 complete : 0=0.0%, 4=88.9%, 8=9.9%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.551 issued rwts: total=1905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.551 filename0: (groupid=0, jobs=1): err= 0: pid=86540: Thu Nov 28 21:26:54 2024 00:19:30.551 read: IOPS=194, BW=777KiB/s (796kB/s)(7772KiB/10004msec) 00:19:30.551 slat (nsec): min=7968, max=41323, avg=14883.27, stdev=4908.22 00:19:30.551 clat (msec): min=3, max=153, avg=82.30, stdev=25.12 00:19:30.551 lat (msec): min=3, max=153, avg=82.31, stdev=25.12 00:19:30.551 clat percentiles (msec): 00:19:30.551 | 1.00th=[ 18], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 61], 00:19:30.551 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 92], 00:19:30.551 | 70.00th=[ 99], 80.00th=[ 106], 90.00th=[ 113], 95.00th=[ 122], 00:19:30.551 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:19:30.551 | 99.99th=[ 153] 00:19:30.551 bw ( KiB/s): min= 528, max= 1080, per=4.16%, avg=765.11, stdev=163.30, samples=19 00:19:30.551 iops : min= 132, max= 270, avg=191.26, stdev=40.82, samples=19 00:19:30.551 lat (msec) : 4=0.26%, 10=0.31%, 20=0.82%, 50=9.11%, 100=61.04% 00:19:30.551 lat (msec) : 250=28.46% 00:19:30.551 cpu : usr=36.78%, sys=1.87%, ctx=1130, majf=0, minf=9 00:19:30.551 IO depths : 1=0.1%, 2=1.2%, 4=4.9%, 8=78.7%, 16=15.1%, 32=0.0%, >=64=0.0% 00:19:30.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.551 complete : 0=0.0%, 4=88.2%, 8=10.8%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.551 issued rwts: total=1943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.551 filename0: (groupid=0, jobs=1): err= 0: pid=86541: Thu Nov 28 21:26:54 2024 00:19:30.551 read: IOPS=187, BW=751KiB/s (770kB/s)(7536KiB/10028msec) 00:19:30.551 slat (nsec): min=4403, max=43737, avg=14404.71, stdev=4588.07 00:19:30.551 clat (msec): min=35, max=144, avg=85.06, 
stdev=21.72 00:19:30.551 lat (msec): min=35, max=144, avg=85.07, stdev=21.72 00:19:30.551 clat percentiles (msec): 00:19:30.551 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 61], 20.00th=[ 66], 00:19:30.551 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 96], 00:19:30.551 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 110], 95.00th=[ 121], 00:19:30.551 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 144], 99.95th=[ 146], 00:19:30.551 | 99.99th=[ 146] 00:19:30.551 bw ( KiB/s): min= 600, max= 968, per=4.06%, avg=747.20, stdev=117.30, samples=20 00:19:30.551 iops : min= 150, max= 242, avg=186.80, stdev=29.33, samples=20 00:19:30.551 lat (msec) : 50=5.41%, 100=65.07%, 250=29.51% 00:19:30.551 cpu : usr=31.48%, sys=1.69%, ctx=930, majf=0, minf=9 00:19:30.551 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=79.7%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:30.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.551 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.551 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.551 filename0: (groupid=0, jobs=1): err= 0: pid=86542: Thu Nov 28 21:26:54 2024 00:19:30.551 read: IOPS=193, BW=773KiB/s (792kB/s)(7768KiB/10045msec) 00:19:30.551 slat (usec): min=7, max=3634, avg=16.47, stdev=82.31 00:19:30.551 clat (msec): min=11, max=143, avg=82.54, stdev=24.35 00:19:30.551 lat (msec): min=11, max=145, avg=82.56, stdev=24.35 00:19:30.551 clat percentiles (msec): 00:19:30.551 | 1.00th=[ 14], 5.00th=[ 44], 10.00th=[ 50], 20.00th=[ 63], 00:19:30.551 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 93], 00:19:30.551 | 70.00th=[ 100], 80.00th=[ 107], 90.00th=[ 112], 95.00th=[ 117], 00:19:30.551 | 99.00th=[ 124], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:19:30.551 | 99.99th=[ 144] 00:19:30.551 bw ( KiB/s): min= 616, max= 1298, per=4.20%, avg=773.30, stdev=173.56, samples=20 00:19:30.551 iops : min= 154, max= 324, avg=193.30, stdev=43.31, samples=20 00:19:30.551 lat (msec) : 20=1.65%, 50=9.22%, 100=60.40%, 250=28.73% 00:19:30.551 cpu : usr=42.80%, sys=2.37%, ctx=1245, majf=0, minf=9 00:19:30.551 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.1%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:30.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.551 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.551 issued rwts: total=1942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.551 filename0: (groupid=0, jobs=1): err= 0: pid=86543: Thu Nov 28 21:26:54 2024 00:19:30.551 read: IOPS=184, BW=740KiB/s (757kB/s)(7412KiB/10022msec) 00:19:30.551 slat (usec): min=4, max=10028, avg=19.10, stdev=232.70 00:19:30.551 clat (msec): min=21, max=144, avg=86.37, stdev=23.12 00:19:30.551 lat (msec): min=21, max=144, avg=86.39, stdev=23.11 00:19:30.551 clat percentiles (msec): 00:19:30.551 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 65], 00:19:30.551 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 86], 60.00th=[ 96], 00:19:30.551 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 121], 00:19:30.551 | 99.00th=[ 133], 99.50th=[ 133], 99.90th=[ 146], 99.95th=[ 146], 00:19:30.551 | 99.99th=[ 146] 00:19:30.551 bw ( KiB/s): min= 512, max= 1024, per=3.99%, avg=734.37, stdev=151.81, samples=19 00:19:30.551 iops : min= 128, max= 256, avg=183.58, stdev=37.94, samples=19 00:19:30.551 lat (msec) : 50=6.91%, 100=59.53%, 250=33.57% 
00:19:30.551 cpu : usr=37.61%, sys=2.12%, ctx=1190, majf=0, minf=9 00:19:30.551 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=76.9%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:30.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.551 complete : 0=0.0%, 4=88.9%, 8=9.8%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.551 issued rwts: total=1853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.551 filename0: (groupid=0, jobs=1): err= 0: pid=86544: Thu Nov 28 21:26:54 2024 00:19:30.551 read: IOPS=199, BW=799KiB/s (818kB/s)(8008KiB/10023msec) 00:19:30.551 slat (usec): min=5, max=8021, avg=20.56, stdev=184.57 00:19:30.551 clat (msec): min=33, max=130, avg=79.98, stdev=21.81 00:19:30.551 lat (msec): min=33, max=130, avg=80.00, stdev=21.81 00:19:30.551 clat percentiles (msec): 00:19:30.551 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 62], 00:19:30.551 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:19:30.551 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 111], 95.00th=[ 116], 00:19:30.551 | 99.00th=[ 123], 99.50th=[ 124], 99.90th=[ 131], 99.95th=[ 131], 00:19:30.551 | 99.99th=[ 131] 00:19:30.551 bw ( KiB/s): min= 688, max= 1080, per=4.32%, avg=794.40, stdev=132.60, samples=20 00:19:30.551 iops : min= 172, max= 270, avg=198.60, stdev=33.15, samples=20 00:19:30.551 lat (msec) : 50=11.29%, 100=64.64%, 250=24.08% 00:19:30.551 cpu : usr=43.55%, sys=2.24%, ctx=1636, majf=0, minf=9 00:19:30.551 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:30.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.551 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.551 issued rwts: total=2002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.551 filename1: (groupid=0, jobs=1): err= 0: pid=86545: Thu Nov 28 21:26:54 2024 00:19:30.551 read: IOPS=189, BW=759KiB/s (777kB/s)(7620KiB/10046msec) 00:19:30.551 slat (usec): min=3, max=12025, avg=30.96, stdev=389.03 00:19:30.551 clat (msec): min=8, max=147, avg=84.09, stdev=23.95 00:19:30.551 lat (msec): min=8, max=147, avg=84.12, stdev=23.95 00:19:30.551 clat percentiles (msec): 00:19:30.551 | 1.00th=[ 16], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 64], 00:19:30.551 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 84], 60.00th=[ 96], 00:19:30.551 | 70.00th=[ 101], 80.00th=[ 108], 90.00th=[ 112], 95.00th=[ 120], 00:19:30.552 | 99.00th=[ 125], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 148], 00:19:30.552 | 99.99th=[ 148] 00:19:30.552 bw ( KiB/s): min= 600, max= 1138, per=4.12%, avg=758.50, stdev=158.00, samples=20 00:19:30.552 iops : min= 150, max= 284, avg=189.60, stdev=39.44, samples=20 00:19:30.552 lat (msec) : 10=0.84%, 20=0.84%, 50=6.98%, 100=61.36%, 250=29.97% 00:19:30.552 cpu : usr=36.64%, sys=1.96%, ctx=1154, majf=0, minf=9 00:19:30.552 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=78.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:30.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 complete : 0=0.0%, 4=88.6%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 issued rwts: total=1905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.552 filename1: (groupid=0, jobs=1): err= 0: pid=86546: Thu Nov 28 21:26:54 2024 00:19:30.552 read: IOPS=200, BW=801KiB/s (820kB/s)(8012KiB/10004msec) 00:19:30.552 slat (usec): min=3, max=4023, 
avg=17.17, stdev=89.70 00:19:30.552 clat (msec): min=6, max=121, avg=79.83, stdev=22.75 00:19:30.552 lat (msec): min=6, max=121, avg=79.85, stdev=22.74 00:19:30.552 clat percentiles (msec): 00:19:30.552 | 1.00th=[ 19], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:19:30.552 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 85], 00:19:30.552 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 109], 95.00th=[ 112], 00:19:30.552 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 122], 00:19:30.552 | 99.99th=[ 122] 00:19:30.552 bw ( KiB/s): min= 664, max= 1080, per=4.30%, avg=790.74, stdev=135.51, samples=19 00:19:30.552 iops : min= 166, max= 270, avg=197.68, stdev=33.88, samples=19 00:19:30.552 lat (msec) : 10=0.30%, 20=0.85%, 50=10.43%, 100=63.80%, 250=24.61% 00:19:30.552 cpu : usr=33.06%, sys=1.65%, ctx=971, majf=0, minf=9 00:19:30.552 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:30.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 issued rwts: total=2003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.552 filename1: (groupid=0, jobs=1): err= 0: pid=86547: Thu Nov 28 21:26:54 2024 00:19:30.552 read: IOPS=190, BW=762KiB/s (781kB/s)(7656KiB/10041msec) 00:19:30.552 slat (usec): min=3, max=8026, avg=26.00, stdev=289.46 00:19:30.552 clat (msec): min=2, max=151, avg=83.69, stdev=24.67 00:19:30.552 lat (msec): min=2, max=151, avg=83.72, stdev=24.67 00:19:30.552 clat percentiles (msec): 00:19:30.552 | 1.00th=[ 11], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 64], 00:19:30.552 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 95], 00:19:30.552 | 70.00th=[ 102], 80.00th=[ 107], 90.00th=[ 113], 95.00th=[ 118], 00:19:30.552 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 153], 99.95th=[ 153], 00:19:30.552 | 99.99th=[ 153] 00:19:30.552 bw ( KiB/s): min= 632, max= 1282, per=4.14%, avg=761.70, stdev=163.27, samples=20 00:19:30.552 iops : min= 158, max= 320, avg=190.40, stdev=40.73, samples=20 00:19:30.552 lat (msec) : 4=0.84%, 20=1.57%, 50=4.65%, 100=61.23%, 250=31.71% 00:19:30.552 cpu : usr=41.79%, sys=2.09%, ctx=1189, majf=0, minf=0 00:19:30.552 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=78.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:30.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 complete : 0=0.0%, 4=88.6%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 issued rwts: total=1914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.552 filename1: (groupid=0, jobs=1): err= 0: pid=86548: Thu Nov 28 21:26:54 2024 00:19:30.552 read: IOPS=193, BW=772KiB/s (791kB/s)(7724KiB/10004msec) 00:19:30.552 slat (usec): min=7, max=4020, avg=16.61, stdev=91.29 00:19:30.552 clat (msec): min=6, max=144, avg=82.79, stdev=24.23 00:19:30.552 lat (msec): min=6, max=144, avg=82.81, stdev=24.23 00:19:30.552 clat percentiles (msec): 00:19:30.552 | 1.00th=[ 18], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 63], 00:19:30.552 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 91], 00:19:30.552 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 111], 95.00th=[ 121], 00:19:30.552 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:19:30.552 | 99.99th=[ 144] 00:19:30.552 bw ( KiB/s): min= 512, max= 1080, per=4.14%, avg=761.68, stdev=160.88, samples=19 00:19:30.552 iops : min= 128, max= 
270, avg=190.42, stdev=40.22, samples=19 00:19:30.552 lat (msec) : 10=0.16%, 20=0.88%, 50=9.43%, 100=59.55%, 250=29.98% 00:19:30.552 cpu : usr=35.80%, sys=1.81%, ctx=1132, majf=0, minf=9 00:19:30.552 IO depths : 1=0.1%, 2=1.4%, 4=5.5%, 8=77.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:19:30.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 complete : 0=0.0%, 4=88.3%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 issued rwts: total=1931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.552 filename1: (groupid=0, jobs=1): err= 0: pid=86549: Thu Nov 28 21:26:54 2024 00:19:30.552 read: IOPS=194, BW=777KiB/s (796kB/s)(7772KiB/10003msec) 00:19:30.552 slat (usec): min=3, max=8029, avg=25.43, stdev=235.45 00:19:30.552 clat (msec): min=3, max=127, avg=82.23, stdev=23.06 00:19:30.552 lat (msec): min=3, max=127, avg=82.26, stdev=23.06 00:19:30.552 clat percentiles (msec): 00:19:30.552 | 1.00th=[ 15], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 63], 00:19:30.552 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 95], 00:19:30.552 | 70.00th=[ 101], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 115], 00:19:30.552 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 128], 99.95th=[ 128], 00:19:30.552 | 99.99th=[ 128] 00:19:30.552 bw ( KiB/s): min= 528, max= 1072, per=4.14%, avg=762.95, stdev=140.10, samples=19 00:19:30.552 iops : min= 132, max= 268, avg=190.74, stdev=35.03, samples=19 00:19:30.552 lat (msec) : 4=0.31%, 10=0.36%, 20=0.77%, 50=9.32%, 100=59.60% 00:19:30.552 lat (msec) : 250=29.64% 00:19:30.552 cpu : usr=41.19%, sys=2.34%, ctx=1382, majf=0, minf=9 00:19:30.552 IO depths : 1=0.1%, 2=1.5%, 4=6.3%, 8=77.1%, 16=15.0%, 32=0.0%, >=64=0.0% 00:19:30.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 complete : 0=0.0%, 4=88.5%, 8=10.1%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 issued rwts: total=1943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.552 filename1: (groupid=0, jobs=1): err= 0: pid=86550: Thu Nov 28 21:26:54 2024 00:19:30.552 read: IOPS=190, BW=762KiB/s (780kB/s)(7628KiB/10013msec) 00:19:30.552 slat (usec): min=4, max=8032, avg=27.53, stdev=317.70 00:19:30.552 clat (msec): min=13, max=144, avg=83.89, stdev=21.70 00:19:30.552 lat (msec): min=13, max=144, avg=83.92, stdev=21.71 00:19:30.552 clat percentiles (msec): 00:19:30.552 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 63], 00:19:30.552 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 95], 00:19:30.552 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 110], 95.00th=[ 120], 00:19:30.552 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 144], 99.95th=[ 144], 00:19:30.552 | 99.99th=[ 144] 00:19:30.552 bw ( KiB/s): min= 632, max= 992, per=4.10%, avg=755.47, stdev=112.09, samples=19 00:19:30.552 iops : min= 158, max= 248, avg=188.84, stdev=28.03, samples=19 00:19:30.552 lat (msec) : 20=0.52%, 50=5.56%, 100=67.75%, 250=26.17% 00:19:30.552 cpu : usr=31.58%, sys=1.57%, ctx=881, majf=0, minf=9 00:19:30.552 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.5%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:30.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 issued rwts: total=1907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.552 filename1: (groupid=0, 
jobs=1): err= 0: pid=86551: Thu Nov 28 21:26:54 2024 00:19:30.552 read: IOPS=197, BW=788KiB/s (807kB/s)(7896KiB/10015msec) 00:19:30.552 slat (usec): min=4, max=8026, avg=29.03, stdev=324.88 00:19:30.552 clat (msec): min=18, max=143, avg=81.03, stdev=21.93 00:19:30.552 lat (msec): min=18, max=143, avg=81.05, stdev=21.93 00:19:30.552 clat percentiles (msec): 00:19:30.552 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 63], 00:19:30.552 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 85], 00:19:30.552 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 118], 00:19:30.552 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 133], 99.95th=[ 144], 00:19:30.552 | 99.99th=[ 144] 00:19:30.552 bw ( KiB/s): min= 664, max= 1072, per=4.27%, avg=785.68, stdev=127.49, samples=19 00:19:30.552 iops : min= 166, max= 268, avg=196.42, stdev=31.87, samples=19 00:19:30.552 lat (msec) : 20=0.61%, 50=8.87%, 100=66.77%, 250=23.76% 00:19:30.552 cpu : usr=35.14%, sys=2.02%, ctx=1012, majf=0, minf=9 00:19:30.552 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.8%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:30.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 issued rwts: total=1974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.552 filename1: (groupid=0, jobs=1): err= 0: pid=86552: Thu Nov 28 21:26:54 2024 00:19:30.552 read: IOPS=193, BW=774KiB/s (793kB/s)(7772KiB/10042msec) 00:19:30.552 slat (usec): min=8, max=8024, avg=24.63, stdev=262.62 00:19:30.552 clat (msec): min=25, max=147, avg=82.52, stdev=22.00 00:19:30.552 lat (msec): min=25, max=147, avg=82.54, stdev=22.00 00:19:30.552 clat percentiles (msec): 00:19:30.552 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 62], 00:19:30.552 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 91], 00:19:30.552 | 70.00th=[ 97], 80.00th=[ 106], 90.00th=[ 112], 95.00th=[ 118], 00:19:30.552 | 99.00th=[ 125], 99.50th=[ 125], 99.90th=[ 148], 99.95th=[ 148], 00:19:30.552 | 99.99th=[ 148] 00:19:30.552 bw ( KiB/s): min= 632, max= 1016, per=4.19%, avg=770.80, stdev=123.21, samples=20 00:19:30.552 iops : min= 158, max= 254, avg=192.70, stdev=30.80, samples=20 00:19:30.552 lat (msec) : 50=7.93%, 100=66.13%, 250=25.94% 00:19:30.552 cpu : usr=36.47%, sys=2.01%, ctx=1114, majf=0, minf=9 00:19:30.552 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.8%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:30.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.552 issued rwts: total=1943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.553 filename2: (groupid=0, jobs=1): err= 0: pid=86553: Thu Nov 28 21:26:54 2024 00:19:30.553 read: IOPS=201, BW=804KiB/s (823kB/s)(8048KiB/10009msec) 00:19:30.553 slat (usec): min=4, max=8027, avg=21.53, stdev=199.80 00:19:30.553 clat (msec): min=12, max=137, avg=79.48, stdev=22.28 00:19:30.553 lat (msec): min=12, max=137, avg=79.51, stdev=22.28 00:19:30.553 clat percentiles (msec): 00:19:30.553 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:19:30.553 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:19:30.553 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 115], 00:19:30.553 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 125], 99.95th=[ 125], 00:19:30.553 | 
99.99th=[ 138] 00:19:30.553 bw ( KiB/s): min= 664, max= 1080, per=4.33%, avg=796.63, stdev=135.80, samples=19 00:19:30.553 iops : min= 166, max= 270, avg=199.16, stdev=33.95, samples=19 00:19:30.553 lat (msec) : 20=0.84%, 50=12.03%, 100=64.91%, 250=22.22% 00:19:30.553 cpu : usr=38.29%, sys=2.15%, ctx=1114, majf=0, minf=9 00:19:30.553 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:30.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.553 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.553 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.553 filename2: (groupid=0, jobs=1): err= 0: pid=86554: Thu Nov 28 21:26:54 2024 00:19:30.553 read: IOPS=191, BW=765KiB/s (784kB/s)(7672KiB/10023msec) 00:19:30.553 slat (usec): min=5, max=8028, avg=21.93, stdev=214.55 00:19:30.553 clat (msec): min=31, max=139, avg=83.46, stdev=20.80 00:19:30.553 lat (msec): min=31, max=139, avg=83.48, stdev=20.79 00:19:30.553 clat percentiles (msec): 00:19:30.553 | 1.00th=[ 45], 5.00th=[ 51], 10.00th=[ 56], 20.00th=[ 65], 00:19:30.553 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 94], 00:19:30.553 | 70.00th=[ 99], 80.00th=[ 106], 90.00th=[ 111], 95.00th=[ 116], 00:19:30.553 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 140], 99.95th=[ 140], 00:19:30.553 | 99.99th=[ 140] 00:19:30.553 bw ( KiB/s): min= 664, max= 1048, per=4.13%, avg=760.85, stdev=114.22, samples=20 00:19:30.553 iops : min= 166, max= 262, avg=190.20, stdev=28.55, samples=20 00:19:30.553 lat (msec) : 50=5.01%, 100=67.15%, 250=27.84% 00:19:30.553 cpu : usr=41.08%, sys=2.27%, ctx=1424, majf=0, minf=9 00:19:30.553 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.8%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:30.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.553 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.553 issued rwts: total=1918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.553 filename2: (groupid=0, jobs=1): err= 0: pid=86555: Thu Nov 28 21:26:54 2024 00:19:30.553 read: IOPS=190, BW=761KiB/s (780kB/s)(7648KiB/10045msec) 00:19:30.553 slat (usec): min=5, max=8033, avg=28.81, stdev=325.79 00:19:30.553 clat (msec): min=15, max=144, avg=83.87, stdev=22.19 00:19:30.553 lat (msec): min=15, max=144, avg=83.90, stdev=22.19 00:19:30.553 clat percentiles (msec): 00:19:30.553 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 66], 00:19:30.553 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 96], 00:19:30.553 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 110], 95.00th=[ 121], 00:19:30.553 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:19:30.553 | 99.99th=[ 144] 00:19:30.553 bw ( KiB/s): min= 600, max= 1048, per=4.12%, avg=758.00, stdev=127.20, samples=20 00:19:30.553 iops : min= 150, max= 262, avg=189.50, stdev=31.80, samples=20 00:19:30.553 lat (msec) : 20=0.73%, 50=6.49%, 100=64.33%, 250=28.45% 00:19:30.553 cpu : usr=31.41%, sys=1.71%, ctx=890, majf=0, minf=9 00:19:30.553 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:30.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.553 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.553 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.553 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:19:30.553 filename2: (groupid=0, jobs=1): err= 0: pid=86556: Thu Nov 28 21:26:54 2024 00:19:30.553 read: IOPS=182, BW=729KiB/s (746kB/s)(7316KiB/10040msec) 00:19:30.553 slat (usec): min=4, max=2027, avg=15.70, stdev=47.33 00:19:30.553 clat (msec): min=11, max=155, avg=87.64, stdev=24.57 00:19:30.553 lat (msec): min=11, max=155, avg=87.66, stdev=24.57 00:19:30.553 clat percentiles (msec): 00:19:30.553 | 1.00th=[ 20], 5.00th=[ 47], 10.00th=[ 53], 20.00th=[ 68], 00:19:30.553 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 95], 60.00th=[ 96], 00:19:30.553 | 70.00th=[ 107], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 121], 00:19:30.553 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:19:30.553 | 99.99th=[ 157] 00:19:30.553 bw ( KiB/s): min= 608, max= 1024, per=3.95%, avg=727.60, stdev=157.04, samples=20 00:19:30.553 iops : min= 152, max= 256, avg=181.90, stdev=39.26, samples=20 00:19:30.553 lat (msec) : 20=1.53%, 50=6.67%, 100=55.60%, 250=36.19% 00:19:30.553 cpu : usr=35.66%, sys=2.07%, ctx=1299, majf=0, minf=9 00:19:30.553 IO depths : 1=0.1%, 2=1.1%, 4=4.7%, 8=77.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:30.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.553 complete : 0=0.0%, 4=89.1%, 8=9.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.553 issued rwts: total=1829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.553 filename2: (groupid=0, jobs=1): err= 0: pid=86557: Thu Nov 28 21:26:54 2024 00:19:30.553 read: IOPS=191, BW=768KiB/s (786kB/s)(7696KiB/10027msec) 00:19:30.553 slat (usec): min=4, max=8026, avg=33.47, stdev=376.14 00:19:30.553 clat (msec): min=35, max=134, avg=83.19, stdev=21.23 00:19:30.553 lat (msec): min=35, max=134, avg=83.22, stdev=21.23 00:19:30.553 clat percentiles (msec): 00:19:30.553 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 65], 00:19:30.553 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 92], 00:19:30.553 | 70.00th=[ 99], 80.00th=[ 106], 90.00th=[ 112], 95.00th=[ 116], 00:19:30.553 | 99.00th=[ 121], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 136], 00:19:30.553 | 99.99th=[ 136] 00:19:30.553 bw ( KiB/s): min= 640, max= 1016, per=4.15%, avg=763.20, stdev=118.41, samples=20 00:19:30.553 iops : min= 160, max= 254, avg=190.80, stdev=29.60, samples=20 00:19:30.553 lat (msec) : 50=7.22%, 100=64.97%, 250=27.81% 00:19:30.553 cpu : usr=37.02%, sys=1.90%, ctx=1101, majf=0, minf=9 00:19:30.553 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=79.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:30.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.553 complete : 0=0.0%, 4=88.2%, 8=10.8%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.553 issued rwts: total=1924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.553 filename2: (groupid=0, jobs=1): err= 0: pid=86558: Thu Nov 28 21:26:54 2024 00:19:30.553 read: IOPS=182, BW=730KiB/s (747kB/s)(7328KiB/10043msec) 00:19:30.553 slat (usec): min=4, max=8032, avg=27.35, stdev=296.04 00:19:30.553 clat (msec): min=27, max=156, avg=87.43, stdev=22.19 00:19:30.553 lat (msec): min=27, max=156, avg=87.46, stdev=22.18 00:19:30.553 clat percentiles (msec): 00:19:30.553 | 1.00th=[ 40], 5.00th=[ 49], 10.00th=[ 62], 20.00th=[ 69], 00:19:30.553 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 89], 60.00th=[ 96], 00:19:30.553 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 115], 95.00th=[ 121], 00:19:30.553 
| 99.00th=[ 133], 99.50th=[ 133], 99.90th=[ 157], 99.95th=[ 157], 00:19:30.553 | 99.99th=[ 157] 00:19:30.553 bw ( KiB/s): min= 528, max= 1000, per=3.96%, avg=729.25, stdev=126.41, samples=20 00:19:30.553 iops : min= 132, max= 250, avg=182.30, stdev=31.58, samples=20 00:19:30.553 lat (msec) : 50=6.00%, 100=60.81%, 250=33.19% 00:19:30.553 cpu : usr=38.43%, sys=2.20%, ctx=1171, majf=0, minf=9 00:19:30.553 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=77.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:30.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.553 complete : 0=0.0%, 4=88.9%, 8=10.1%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.553 issued rwts: total=1832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.553 filename2: (groupid=0, jobs=1): err= 0: pid=86559: Thu Nov 28 21:26:54 2024 00:19:30.553 read: IOPS=189, BW=758KiB/s (776kB/s)(7616KiB/10049msec) 00:19:30.553 slat (usec): min=3, max=9026, avg=23.18, stdev=277.75 00:19:30.553 clat (msec): min=8, max=147, avg=84.29, stdev=24.45 00:19:30.553 lat (msec): min=8, max=147, avg=84.32, stdev=24.44 00:19:30.553 clat percentiles (msec): 00:19:30.553 | 1.00th=[ 9], 5.00th=[ 46], 10.00th=[ 57], 20.00th=[ 64], 00:19:30.553 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 96], 00:19:30.553 | 70.00th=[ 101], 80.00th=[ 108], 90.00th=[ 112], 95.00th=[ 118], 00:19:30.553 | 99.00th=[ 124], 99.50th=[ 125], 99.90th=[ 144], 99.95th=[ 148], 00:19:30.553 | 99.99th=[ 148] 00:19:30.553 bw ( KiB/s): min= 608, max= 1309, per=4.10%, avg=754.65, stdev=172.54, samples=20 00:19:30.553 iops : min= 152, max= 327, avg=188.65, stdev=43.09, samples=20 00:19:30.553 lat (msec) : 10=1.58%, 20=0.95%, 50=4.57%, 100=61.92%, 250=30.99% 00:19:30.553 cpu : usr=37.32%, sys=2.00%, ctx=1070, majf=0, minf=9 00:19:30.553 IO depths : 1=0.1%, 2=0.6%, 4=2.0%, 8=80.5%, 16=16.8%, 32=0.0%, >=64=0.0% 00:19:30.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.553 complete : 0=0.0%, 4=88.4%, 8=11.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.553 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.553 filename2: (groupid=0, jobs=1): err= 0: pid=86560: Thu Nov 28 21:26:54 2024 00:19:30.553 read: IOPS=196, BW=787KiB/s (805kB/s)(7884KiB/10024msec) 00:19:30.553 slat (usec): min=3, max=12027, avg=29.04, stdev=349.21 00:19:30.553 clat (msec): min=35, max=144, avg=81.17, stdev=21.31 00:19:30.553 lat (msec): min=35, max=144, avg=81.20, stdev=21.30 00:19:30.553 clat percentiles (msec): 00:19:30.553 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 63], 00:19:30.553 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:19:30.553 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 116], 00:19:30.553 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 131], 99.95th=[ 144], 00:19:30.553 | 99.99th=[ 144] 00:19:30.553 bw ( KiB/s): min= 664, max= 1056, per=4.25%, avg=782.05, stdev=124.89, samples=20 00:19:30.553 iops : min= 166, max= 264, avg=195.50, stdev=31.23, samples=20 00:19:30.553 lat (msec) : 50=10.05%, 100=63.98%, 250=25.98% 00:19:30.553 cpu : usr=38.94%, sys=2.08%, ctx=1150, majf=0, minf=9 00:19:30.553 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:30.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.554 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.554 issued 
rwts: total=1971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:30.554 00:19:30.554 Run status group 0 (all jobs): 00:19:30.554 READ: bw=18.0MiB/s (18.8MB/s), 729KiB/s-804KiB/s (746kB/s-823kB/s), io=181MiB (189MB), run=10003-10052msec 00:19:30.814 21:26:54 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:30.814 21:26:54 -- target/dif.sh@43 -- # local sub 00:19:30.814 21:26:54 -- target/dif.sh@45 -- # for sub in "$@" 00:19:30.814 21:26:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:30.814 21:26:54 -- target/dif.sh@36 -- # local sub_id=0 00:19:30.814 21:26:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:30.814 21:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.814 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:19:30.814 21:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.814 21:26:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:30.814 21:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.814 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:19:30.814 21:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.814 21:26:54 -- target/dif.sh@45 -- # for sub in "$@" 00:19:30.814 21:26:54 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:30.814 21:26:54 -- target/dif.sh@36 -- # local sub_id=1 00:19:30.814 21:26:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:30.814 21:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.814 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:19:30.814 21:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.814 21:26:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:30.814 21:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.814 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:19:30.814 21:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.814 21:26:54 -- target/dif.sh@45 -- # for sub in "$@" 00:19:30.814 21:26:54 -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:30.814 21:26:54 -- target/dif.sh@36 -- # local sub_id=2 00:19:30.814 21:26:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:30.814 21:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.814 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:19:30.814 21:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.814 21:26:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:30.814 21:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.814 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:19:30.814 21:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.814 21:26:54 -- target/dif.sh@115 -- # NULL_DIF=1 00:19:30.814 21:26:54 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:30.814 21:26:54 -- target/dif.sh@115 -- # numjobs=2 00:19:30.814 21:26:54 -- target/dif.sh@115 -- # iodepth=8 00:19:30.814 21:26:54 -- target/dif.sh@115 -- # runtime=5 00:19:30.814 21:26:54 -- target/dif.sh@115 -- # files=1 00:19:30.814 21:26:54 -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:30.814 21:26:54 -- target/dif.sh@28 -- # local sub 00:19:30.814 21:26:54 -- target/dif.sh@30 -- # for sub in "$@" 00:19:30.814 21:26:54 -- target/dif.sh@31 -- # create_subsystem 0 00:19:30.814 21:26:54 -- target/dif.sh@18 -- # local sub_id=0 00:19:30.814 21:26:54 -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:30.814 21:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.814 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:19:30.814 bdev_null0 00:19:30.814 21:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.814 21:26:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:30.814 21:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.814 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:19:30.814 21:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.814 21:26:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:30.814 21:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.814 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:19:30.814 21:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.814 21:26:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:30.814 21:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.814 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:19:30.814 [2024-11-28 21:26:54.505609] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.814 21:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.814 21:26:54 -- target/dif.sh@30 -- # for sub in "$@" 00:19:30.814 21:26:54 -- target/dif.sh@31 -- # create_subsystem 1 00:19:30.814 21:26:54 -- target/dif.sh@18 -- # local sub_id=1 00:19:30.814 21:26:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:30.814 21:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.814 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:19:30.814 bdev_null1 00:19:30.814 21:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.814 21:26:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:30.814 21:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.814 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:19:30.814 21:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.814 21:26:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:30.815 21:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.815 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:19:30.815 21:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.815 21:26:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:30.815 21:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.815 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:19:30.815 21:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.815 21:26:54 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:30.815 21:26:54 -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:30.815 21:26:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:30.815 21:26:54 -- nvmf/common.sh@520 -- # config=() 00:19:30.815 21:26:54 -- nvmf/common.sh@520 -- # local subsystem config 00:19:30.815 21:26:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:30.815 21:26:54 -- nvmf/common.sh@542 
-- # config+=("$(cat <<-EOF 00:19:30.815 { 00:19:30.815 "params": { 00:19:30.815 "name": "Nvme$subsystem", 00:19:30.815 "trtype": "$TEST_TRANSPORT", 00:19:30.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.815 "adrfam": "ipv4", 00:19:30.815 "trsvcid": "$NVMF_PORT", 00:19:30.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.815 "hdgst": ${hdgst:-false}, 00:19:30.815 "ddgst": ${ddgst:-false} 00:19:30.815 }, 00:19:30.815 "method": "bdev_nvme_attach_controller" 00:19:30.815 } 00:19:30.815 EOF 00:19:30.815 )") 00:19:30.815 21:26:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:30.815 21:26:54 -- target/dif.sh@82 -- # gen_fio_conf 00:19:30.815 21:26:54 -- target/dif.sh@54 -- # local file 00:19:30.815 21:26:54 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:30.815 21:26:54 -- target/dif.sh@56 -- # cat 00:19:30.815 21:26:54 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:30.815 21:26:54 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:30.815 21:26:54 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:30.815 21:26:54 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:30.815 21:26:54 -- nvmf/common.sh@542 -- # cat 00:19:30.815 21:26:54 -- common/autotest_common.sh@1330 -- # shift 00:19:30.815 21:26:54 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:30.815 21:26:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:30.815 21:26:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:30.815 21:26:54 -- target/dif.sh@72 -- # (( file <= files )) 00:19:30.815 21:26:54 -- target/dif.sh@73 -- # cat 00:19:30.815 21:26:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:30.815 21:26:54 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:30.815 21:26:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:30.815 21:26:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:30.815 21:26:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:30.815 { 00:19:30.815 "params": { 00:19:30.815 "name": "Nvme$subsystem", 00:19:30.815 "trtype": "$TEST_TRANSPORT", 00:19:30.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.815 "adrfam": "ipv4", 00:19:30.815 "trsvcid": "$NVMF_PORT", 00:19:30.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.815 "hdgst": ${hdgst:-false}, 00:19:30.815 "ddgst": ${ddgst:-false} 00:19:30.815 }, 00:19:30.815 "method": "bdev_nvme_attach_controller" 00:19:30.815 } 00:19:30.815 EOF 00:19:30.815 )") 00:19:30.815 21:26:54 -- target/dif.sh@72 -- # (( file++ )) 00:19:30.815 21:26:54 -- target/dif.sh@72 -- # (( file <= files )) 00:19:30.815 21:26:54 -- nvmf/common.sh@542 -- # cat 00:19:31.075 21:26:54 -- nvmf/common.sh@544 -- # jq . 
00:19:31.075 21:26:54 -- nvmf/common.sh@545 -- # IFS=, 00:19:31.075 21:26:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:31.075 "params": { 00:19:31.075 "name": "Nvme0", 00:19:31.075 "trtype": "tcp", 00:19:31.075 "traddr": "10.0.0.2", 00:19:31.075 "adrfam": "ipv4", 00:19:31.075 "trsvcid": "4420", 00:19:31.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:31.075 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:31.075 "hdgst": false, 00:19:31.075 "ddgst": false 00:19:31.075 }, 00:19:31.075 "method": "bdev_nvme_attach_controller" 00:19:31.075 },{ 00:19:31.075 "params": { 00:19:31.075 "name": "Nvme1", 00:19:31.075 "trtype": "tcp", 00:19:31.075 "traddr": "10.0.0.2", 00:19:31.075 "adrfam": "ipv4", 00:19:31.075 "trsvcid": "4420", 00:19:31.075 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.075 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.075 "hdgst": false, 00:19:31.075 "ddgst": false 00:19:31.075 }, 00:19:31.075 "method": "bdev_nvme_attach_controller" 00:19:31.075 }' 00:19:31.075 21:26:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:31.075 21:26:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:31.075 21:26:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.075 21:26:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.075 21:26:54 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:31.075 21:26:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:31.075 21:26:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:31.075 21:26:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:31.075 21:26:54 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:31.075 21:26:54 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:31.075 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:31.075 ... 00:19:31.075 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:31.075 ... 00:19:31.075 fio-3.35 00:19:31.075 Starting 4 threads 00:19:31.643 [2024-11-28 21:26:55.095719] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
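The four threads just launched come from the parameters selected earlier (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1): two bdevs with two jobs each. A hypothetical job file equivalent to what gen_fio_conf emits for this run is sketched below; the bdev names Nvme0n1/Nvme1n1 follow SPDK's usual <controller>n<nsid> naming and are an assumption, as is the exact option spelling.

# Hypothetical job file matching the run above (a sketch, not the generated file).
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF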
00:19:31.643 [2024-11-28 21:26:55.095789] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:36.912 00:19:36.912 filename0: (groupid=0, jobs=1): err= 0: pid=86694: Thu Nov 28 21:27:00 2024 00:19:36.912 read: IOPS=1768, BW=13.8MiB/s (14.5MB/s)(69.1MiB/5002msec) 00:19:36.912 slat (nsec): min=7088, max=76503, avg=15254.25, stdev=4629.27 00:19:36.912 clat (usec): min=2562, max=5611, avg=4461.39, stdev=177.80 00:19:36.912 lat (usec): min=2605, max=5625, avg=4476.64, stdev=178.00 00:19:36.912 clat percentiles (usec): 00:19:36.912 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4293], 20.00th=[ 4359], 00:19:36.912 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4424], 60.00th=[ 4490], 00:19:36.912 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4752], 00:19:36.912 | 99.00th=[ 4817], 99.50th=[ 5145], 99.90th=[ 5342], 99.95th=[ 5407], 00:19:36.912 | 99.99th=[ 5604] 00:19:36.912 bw ( KiB/s): min=13824, max=14336, per=21.15%, avg=14154.22, stdev=159.74, samples=9 00:19:36.912 iops : min= 1728, max= 1792, avg=1769.22, stdev=19.94, samples=9 00:19:36.912 lat (msec) : 4=1.10%, 10=98.90% 00:19:36.912 cpu : usr=91.94%, sys=7.20%, ctx=629, majf=0, minf=9 00:19:36.912 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.912 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.912 issued rwts: total=8848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.912 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:36.912 filename0: (groupid=0, jobs=1): err= 0: pid=86695: Thu Nov 28 21:27:00 2024 00:19:36.912 read: IOPS=2395, BW=18.7MiB/s (19.6MB/s)(93.6MiB/5004msec) 00:19:36.912 slat (nsec): min=6956, max=52511, avg=10792.15, stdev=4309.52 00:19:36.912 clat (usec): min=1004, max=7967, avg=3312.22, stdev=1002.16 00:19:36.912 lat (usec): min=1012, max=7990, avg=3323.01, stdev=1002.06 00:19:36.912 clat percentiles (usec): 00:19:36.912 | 1.00th=[ 1991], 5.00th=[ 2089], 10.00th=[ 2147], 20.00th=[ 2245], 00:19:36.912 | 30.00th=[ 2409], 40.00th=[ 2540], 50.00th=[ 2999], 60.00th=[ 4113], 00:19:36.912 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555], 00:19:36.912 | 99.00th=[ 4686], 99.50th=[ 4752], 99.90th=[ 4883], 99.95th=[ 7767], 00:19:36.912 | 99.99th=[ 7767] 00:19:36.912 bw ( KiB/s): min=16656, max=19712, per=28.64%, avg=19168.00, stdev=907.42, samples=10 00:19:36.912 iops : min= 2082, max= 2464, avg=2396.00, stdev=113.43, samples=10 00:19:36.912 lat (msec) : 2=1.14%, 4=55.17%, 10=43.69% 00:19:36.912 cpu : usr=91.43%, sys=7.64%, ctx=18, majf=0, minf=9 00:19:36.912 IO depths : 1=0.1%, 2=1.0%, 4=63.1%, 8=35.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.912 complete : 0=0.0%, 4=99.6%, 8=0.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.912 issued rwts: total=11987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.912 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:36.912 filename1: (groupid=0, jobs=1): err= 0: pid=86696: Thu Nov 28 21:27:00 2024 00:19:36.912 read: IOPS=2435, BW=19.0MiB/s (20.0MB/s)(95.2MiB/5001msec) 00:19:36.912 slat (nsec): min=7082, max=54955, avg=11518.23, stdev=4996.64 00:19:36.912 clat (usec): min=1014, max=7390, avg=3255.06, stdev=1009.09 00:19:36.912 lat (usec): min=1023, max=7416, avg=3266.58, stdev=1008.00 00:19:36.912 clat percentiles (usec): 00:19:36.912 | 1.00th=[ 1237], 5.00th=[ 
2057], 10.00th=[ 2114], 20.00th=[ 2180], 00:19:36.912 | 30.00th=[ 2409], 40.00th=[ 2573], 50.00th=[ 2966], 60.00th=[ 4047], 00:19:36.912 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4555], 00:19:36.912 | 99.00th=[ 4686], 99.50th=[ 4752], 99.90th=[ 4817], 99.95th=[ 7177], 00:19:36.912 | 99.99th=[ 7242] 00:19:36.912 bw ( KiB/s): min=19120, max=19888, per=29.15%, avg=19511.11, stdev=231.32, samples=9 00:19:36.912 iops : min= 2390, max= 2486, avg=2438.89, stdev=28.92, samples=9 00:19:36.912 lat (msec) : 2=3.09%, 4=55.73%, 10=41.18% 00:19:36.912 cpu : usr=91.28%, sys=7.68%, ctx=6, majf=0, minf=9 00:19:36.912 IO depths : 1=0.1%, 2=0.1%, 4=63.6%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.912 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.912 issued rwts: total=12182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.913 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:36.913 filename1: (groupid=0, jobs=1): err= 0: pid=86697: Thu Nov 28 21:27:00 2024 00:19:36.913 read: IOPS=1768, BW=13.8MiB/s (14.5MB/s)(69.1MiB/5002msec) 00:19:36.913 slat (nsec): min=7600, max=56565, avg=14977.47, stdev=4356.05 00:19:36.913 clat (usec): min=2553, max=5923, avg=4463.35, stdev=181.10 00:19:36.913 lat (usec): min=2596, max=5947, avg=4478.33, stdev=181.15 00:19:36.913 clat percentiles (usec): 00:19:36.913 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4293], 20.00th=[ 4359], 00:19:36.913 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4424], 60.00th=[ 4490], 00:19:36.913 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4752], 00:19:36.913 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 5604], 99.95th=[ 5735], 00:19:36.913 | 99.99th=[ 5932] 00:19:36.913 bw ( KiB/s): min=13824, max=14336, per=21.14%, avg=14151.11, stdev=158.21, samples=9 00:19:36.913 iops : min= 1728, max= 1792, avg=1768.89, stdev=19.78, samples=9 00:19:36.913 lat (msec) : 4=1.07%, 10=98.93% 00:19:36.913 cpu : usr=92.28%, sys=6.98%, ctx=51, majf=0, minf=9 00:19:36.913 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.913 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.913 issued rwts: total=8848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.913 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:36.913 00:19:36.913 Run status group 0 (all jobs): 00:19:36.913 READ: bw=65.4MiB/s (68.5MB/s), 13.8MiB/s-19.0MiB/s (14.5MB/s-20.0MB/s), io=327MiB (343MB), run=5001-5004msec 00:19:36.913 21:27:00 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:36.913 21:27:00 -- target/dif.sh@43 -- # local sub 00:19:36.913 21:27:00 -- target/dif.sh@45 -- # for sub in "$@" 00:19:36.913 21:27:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:36.913 21:27:00 -- target/dif.sh@36 -- # local sub_id=0 00:19:36.913 21:27:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:36.913 21:27:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.913 21:27:00 -- common/autotest_common.sh@10 -- # set +x 00:19:36.913 21:27:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.913 21:27:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:36.913 21:27:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.913 21:27:00 -- common/autotest_common.sh@10 -- # set +x 00:19:36.913 21:27:00 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.913 21:27:00 -- target/dif.sh@45 -- # for sub in "$@" 00:19:36.913 21:27:00 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:36.913 21:27:00 -- target/dif.sh@36 -- # local sub_id=1 00:19:36.913 21:27:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:36.913 21:27:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.913 21:27:00 -- common/autotest_common.sh@10 -- # set +x 00:19:36.913 21:27:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.913 21:27:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:36.913 21:27:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.913 21:27:00 -- common/autotest_common.sh@10 -- # set +x 00:19:36.913 21:27:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.913 00:19:36.913 real 0m23.000s 00:19:36.913 user 2m4.136s 00:19:36.913 sys 0m8.221s 00:19:36.913 ************************************ 00:19:36.913 END TEST fio_dif_rand_params 00:19:36.913 ************************************ 00:19:36.913 21:27:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:36.913 21:27:00 -- common/autotest_common.sh@10 -- # set +x 00:19:36.913 21:27:00 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:36.913 21:27:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:36.913 21:27:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:36.913 21:27:00 -- common/autotest_common.sh@10 -- # set +x 00:19:36.913 ************************************ 00:19:36.913 START TEST fio_dif_digest 00:19:36.913 ************************************ 00:19:36.913 21:27:00 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:19:36.913 21:27:00 -- target/dif.sh@123 -- # local NULL_DIF 00:19:36.913 21:27:00 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:36.913 21:27:00 -- target/dif.sh@125 -- # local hdgst ddgst 00:19:36.913 21:27:00 -- target/dif.sh@127 -- # NULL_DIF=3 00:19:36.913 21:27:00 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:36.913 21:27:00 -- target/dif.sh@127 -- # numjobs=3 00:19:36.913 21:27:00 -- target/dif.sh@127 -- # iodepth=3 00:19:36.913 21:27:00 -- target/dif.sh@127 -- # runtime=10 00:19:36.913 21:27:00 -- target/dif.sh@128 -- # hdgst=true 00:19:36.913 21:27:00 -- target/dif.sh@128 -- # ddgst=true 00:19:36.913 21:27:00 -- target/dif.sh@130 -- # create_subsystems 0 00:19:36.913 21:27:00 -- target/dif.sh@28 -- # local sub 00:19:36.913 21:27:00 -- target/dif.sh@30 -- # for sub in "$@" 00:19:36.913 21:27:00 -- target/dif.sh@31 -- # create_subsystem 0 00:19:36.913 21:27:00 -- target/dif.sh@18 -- # local sub_id=0 00:19:36.913 21:27:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:36.913 21:27:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.913 21:27:00 -- common/autotest_common.sh@10 -- # set +x 00:19:36.913 bdev_null0 00:19:36.913 21:27:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.913 21:27:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:36.913 21:27:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.913 21:27:00 -- common/autotest_common.sh@10 -- # set +x 00:19:36.913 21:27:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.913 21:27:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:36.913 
21:27:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.913 21:27:00 -- common/autotest_common.sh@10 -- # set +x 00:19:36.913 21:27:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.913 21:27:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:36.913 21:27:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.913 21:27:00 -- common/autotest_common.sh@10 -- # set +x 00:19:36.913 [2024-11-28 21:27:00.483650] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.913 21:27:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.913 21:27:00 -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:36.913 21:27:00 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:36.913 21:27:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:36.913 21:27:00 -- nvmf/common.sh@520 -- # config=() 00:19:36.913 21:27:00 -- nvmf/common.sh@520 -- # local subsystem config 00:19:36.913 21:27:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:36.913 21:27:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:36.913 21:27:00 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:36.913 21:27:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:36.913 { 00:19:36.913 "params": { 00:19:36.913 "name": "Nvme$subsystem", 00:19:36.913 "trtype": "$TEST_TRANSPORT", 00:19:36.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.913 "adrfam": "ipv4", 00:19:36.913 "trsvcid": "$NVMF_PORT", 00:19:36.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.913 "hdgst": ${hdgst:-false}, 00:19:36.913 "ddgst": ${ddgst:-false} 00:19:36.913 }, 00:19:36.913 "method": "bdev_nvme_attach_controller" 00:19:36.913 } 00:19:36.913 EOF 00:19:36.913 )") 00:19:36.913 21:27:00 -- target/dif.sh@82 -- # gen_fio_conf 00:19:36.913 21:27:00 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:36.913 21:27:00 -- target/dif.sh@54 -- # local file 00:19:36.913 21:27:00 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:36.913 21:27:00 -- target/dif.sh@56 -- # cat 00:19:36.913 21:27:00 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:36.913 21:27:00 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:36.913 21:27:00 -- common/autotest_common.sh@1330 -- # shift 00:19:36.913 21:27:00 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:36.913 21:27:00 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:36.913 21:27:00 -- nvmf/common.sh@542 -- # cat 00:19:36.913 21:27:00 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:36.913 21:27:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:36.913 21:27:00 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:36.913 21:27:00 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:36.913 21:27:00 -- target/dif.sh@72 -- # (( file <= files )) 00:19:36.913 21:27:00 -- nvmf/common.sh@544 -- # jq . 
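The xtrace above builds the digest test's target side: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, exposed through subsystem cnode0 on the TCP listener. Issued by hand through SPDK's rpc.py, the same sequence would look roughly like the sketch below (the rpc.py path and an already-created TCP transport are assumptions; the arguments are the ones visible in the trace):

# Manual equivalent of the rpc_cmd calls traced above (sketch).
RPC=./scripts/rpc.py   # path is an assumption
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# (assumes "nvmf_create_transport -t tcp" was already run when the target started)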
00:19:36.913 21:27:00 -- nvmf/common.sh@545 -- # IFS=, 00:19:36.913 21:27:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:36.913 "params": { 00:19:36.913 "name": "Nvme0", 00:19:36.913 "trtype": "tcp", 00:19:36.913 "traddr": "10.0.0.2", 00:19:36.913 "adrfam": "ipv4", 00:19:36.913 "trsvcid": "4420", 00:19:36.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:36.913 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:36.913 "hdgst": true, 00:19:36.913 "ddgst": true 00:19:36.913 }, 00:19:36.913 "method": "bdev_nvme_attach_controller" 00:19:36.913 }' 00:19:36.913 21:27:00 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:36.913 21:27:00 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:36.913 21:27:00 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:36.913 21:27:00 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:36.913 21:27:00 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:36.913 21:27:00 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:36.913 21:27:00 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:36.913 21:27:00 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:36.913 21:27:00 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:36.913 21:27:00 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:37.172 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:37.172 ... 00:19:37.172 fio-3.35 00:19:37.172 Starting 3 threads 00:19:37.431 [2024-11-28 21:27:00.994901] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
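The "hdgst": true / "ddgst": true parameters just printed are what make this the digest test: they enable the NVMe/TCP PDU header and data digests (CRC32C) on the connection fio opens. For comparison, a kernel initiator connecting with the same digests enabled would look roughly like the sketch below; the nvme-cli flag names are an assumption and are not used by this test.

# Hedged sketch of a digest-enabled kernel-host connect (not part of this run).
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode0 \
  --hdr-digest --data-digest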
00:19:37.431 [2024-11-28 21:27:00.994979] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:47.424 00:19:47.424 filename0: (groupid=0, jobs=1): err= 0: pid=86803: Thu Nov 28 21:27:11 2024 00:19:47.424 read: IOPS=233, BW=29.1MiB/s (30.6MB/s)(292MiB/10012msec) 00:19:47.424 slat (nsec): min=6895, max=59729, avg=15347.40, stdev=6278.00 00:19:47.424 clat (usec): min=11821, max=15956, avg=12835.52, stdev=591.70 00:19:47.424 lat (usec): min=11835, max=15993, avg=12850.87, stdev=592.65 00:19:47.424 clat percentiles (usec): 00:19:47.424 | 1.00th=[11863], 5.00th=[11994], 10.00th=[12125], 20.00th=[12387], 00:19:47.424 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:19:47.424 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13698], 95.00th=[13960], 00:19:47.424 | 99.00th=[14222], 99.50th=[14353], 99.90th=[15926], 99.95th=[15926], 00:19:47.424 | 99.99th=[15926] 00:19:47.424 bw ( KiB/s): min=28416, max=31488, per=33.33%, avg=29836.80, stdev=798.71, samples=20 00:19:47.424 iops : min= 222, max= 246, avg=233.10, stdev= 6.24, samples=20 00:19:47.424 lat (msec) : 20=100.00% 00:19:47.424 cpu : usr=91.35%, sys=8.13%, ctx=8, majf=0, minf=9 00:19:47.424 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.424 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:47.424 filename0: (groupid=0, jobs=1): err= 0: pid=86804: Thu Nov 28 21:27:11 2024 00:19:47.424 read: IOPS=233, BW=29.1MiB/s (30.6MB/s)(292MiB/10010msec) 00:19:47.424 slat (nsec): min=7162, max=55413, avg=16639.38, stdev=5863.62 00:19:47.424 clat (usec): min=11809, max=14498, avg=12828.33, stdev=580.31 00:19:47.424 lat (usec): min=11822, max=14520, avg=12844.96, stdev=581.20 00:19:47.424 clat percentiles (usec): 00:19:47.424 | 1.00th=[11863], 5.00th=[11994], 10.00th=[12125], 20.00th=[12387], 00:19:47.424 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:19:47.424 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13698], 95.00th=[13829], 00:19:47.424 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14484], 99.95th=[14484], 00:19:47.424 | 99.99th=[14484] 00:19:47.424 bw ( KiB/s): min=28359, max=31488, per=33.37%, avg=29868.16, stdev=769.62, samples=19 00:19:47.424 iops : min= 221, max= 246, avg=233.32, stdev= 6.07, samples=19 00:19:47.424 lat (msec) : 20=100.00% 00:19:47.424 cpu : usr=91.35%, sys=8.13%, ctx=9, majf=0, minf=11 00:19:47.424 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.424 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:47.424 filename0: (groupid=0, jobs=1): err= 0: pid=86805: Thu Nov 28 21:27:11 2024 00:19:47.424 read: IOPS=233, BW=29.1MiB/s (30.6MB/s)(292MiB/10011msec) 00:19:47.424 slat (nsec): min=7149, max=62988, avg=16662.59, stdev=6098.76 00:19:47.424 clat (usec): min=11804, max=14603, avg=12829.96, stdev=583.47 00:19:47.424 lat (usec): min=11817, max=14635, avg=12846.62, stdev=584.42 00:19:47.424 clat percentiles (usec): 00:19:47.424 | 1.00th=[11863], 5.00th=[11994], 
10.00th=[12125], 20.00th=[12387], 00:19:47.424 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:19:47.424 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13698], 95.00th=[13960], 00:19:47.424 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14615], 99.95th=[14615], 00:19:47.424 | 99.99th=[14615] 00:19:47.424 bw ( KiB/s): min=28472, max=31488, per=33.33%, avg=29839.60, stdev=793.55, samples=20 00:19:47.424 iops : min= 222, max= 246, avg=233.10, stdev= 6.24, samples=20 00:19:47.424 lat (msec) : 20=100.00% 00:19:47.424 cpu : usr=91.69%, sys=7.70%, ctx=726, majf=0, minf=9 00:19:47.424 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.424 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:47.424 00:19:47.424 Run status group 0 (all jobs): 00:19:47.424 READ: bw=87.4MiB/s (91.7MB/s), 29.1MiB/s-29.1MiB/s (30.6MB/s-30.6MB/s), io=875MiB (918MB), run=10010-10012msec 00:19:47.683 21:27:11 -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:47.683 21:27:11 -- target/dif.sh@43 -- # local sub 00:19:47.683 21:27:11 -- target/dif.sh@45 -- # for sub in "$@" 00:19:47.683 21:27:11 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:47.683 21:27:11 -- target/dif.sh@36 -- # local sub_id=0 00:19:47.683 21:27:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:47.683 21:27:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.683 21:27:11 -- common/autotest_common.sh@10 -- # set +x 00:19:47.683 21:27:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.683 21:27:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:47.683 21:27:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.683 21:27:11 -- common/autotest_common.sh@10 -- # set +x 00:19:47.683 21:27:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.683 00:19:47.683 real 0m10.839s 00:19:47.683 user 0m27.985s 00:19:47.683 sys 0m2.619s 00:19:47.683 21:27:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:47.683 ************************************ 00:19:47.683 END TEST fio_dif_digest 00:19:47.683 ************************************ 00:19:47.683 21:27:11 -- common/autotest_common.sh@10 -- # set +x 00:19:47.683 21:27:11 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:47.683 21:27:11 -- target/dif.sh@147 -- # nvmftestfini 00:19:47.683 21:27:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:47.683 21:27:11 -- nvmf/common.sh@116 -- # sync 00:19:47.683 21:27:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:47.683 21:27:11 -- nvmf/common.sh@119 -- # set +e 00:19:47.683 21:27:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:47.683 21:27:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:47.683 rmmod nvme_tcp 00:19:47.683 rmmod nvme_fabrics 00:19:47.683 rmmod nvme_keyring 00:19:47.942 21:27:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:47.942 21:27:11 -- nvmf/common.sh@123 -- # set -e 00:19:47.942 21:27:11 -- nvmf/common.sh@124 -- # return 0 00:19:47.942 21:27:11 -- nvmf/common.sh@477 -- # '[' -n 86053 ']' 00:19:47.942 21:27:11 -- nvmf/common.sh@478 -- # killprocess 86053 00:19:47.942 21:27:11 -- common/autotest_common.sh@936 -- # '[' -z 86053 ']' 00:19:47.942 21:27:11 -- common/autotest_common.sh@940 -- # kill 
-0 86053 00:19:47.942 21:27:11 -- common/autotest_common.sh@941 -- # uname 00:19:47.942 21:27:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:47.942 21:27:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86053 00:19:47.942 killing process with pid 86053 00:19:47.942 21:27:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:47.942 21:27:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:47.942 21:27:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86053' 00:19:47.942 21:27:11 -- common/autotest_common.sh@955 -- # kill 86053 00:19:47.942 21:27:11 -- common/autotest_common.sh@960 -- # wait 86053 00:19:47.942 21:27:11 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:47.942 21:27:11 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:48.201 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:48.460 Waiting for block devices as requested 00:19:48.460 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:48.460 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:48.460 21:27:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:48.460 21:27:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:48.460 21:27:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:48.460 21:27:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:48.460 21:27:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.460 21:27:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:48.460 21:27:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.719 21:27:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:48.719 ************************************ 00:19:48.719 END TEST nvmf_dif 00:19:48.719 ************************************ 00:19:48.719 00:19:48.719 real 0m58.865s 00:19:48.719 user 3m46.474s 00:19:48.719 sys 0m19.351s 00:19:48.719 21:27:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:48.719 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:19:48.719 21:27:12 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:48.719 21:27:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:48.719 21:27:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:48.719 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:19:48.719 ************************************ 00:19:48.719 START TEST nvmf_abort_qd_sizes 00:19:48.719 ************************************ 00:19:48.719 21:27:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:48.719 * Looking for test storage... 
00:19:48.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:48.719 21:27:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:48.719 21:27:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:48.719 21:27:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:48.719 21:27:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:48.719 21:27:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:48.719 21:27:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:48.719 21:27:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:48.719 21:27:12 -- scripts/common.sh@335 -- # IFS=.-: 00:19:48.719 21:27:12 -- scripts/common.sh@335 -- # read -ra ver1 00:19:48.719 21:27:12 -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.719 21:27:12 -- scripts/common.sh@336 -- # read -ra ver2 00:19:48.719 21:27:12 -- scripts/common.sh@337 -- # local 'op=<' 00:19:48.719 21:27:12 -- scripts/common.sh@339 -- # ver1_l=2 00:19:48.719 21:27:12 -- scripts/common.sh@340 -- # ver2_l=1 00:19:48.719 21:27:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:48.719 21:27:12 -- scripts/common.sh@343 -- # case "$op" in 00:19:48.719 21:27:12 -- scripts/common.sh@344 -- # : 1 00:19:48.719 21:27:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:48.719 21:27:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:48.719 21:27:12 -- scripts/common.sh@364 -- # decimal 1 00:19:48.719 21:27:12 -- scripts/common.sh@352 -- # local d=1 00:19:48.719 21:27:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.719 21:27:12 -- scripts/common.sh@354 -- # echo 1 00:19:48.719 21:27:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:48.719 21:27:12 -- scripts/common.sh@365 -- # decimal 2 00:19:48.719 21:27:12 -- scripts/common.sh@352 -- # local d=2 00:19:48.719 21:27:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.719 21:27:12 -- scripts/common.sh@354 -- # echo 2 00:19:48.719 21:27:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:48.719 21:27:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:48.719 21:27:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:48.719 21:27:12 -- scripts/common.sh@367 -- # return 0 00:19:48.719 21:27:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.719 21:27:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:48.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.719 --rc genhtml_branch_coverage=1 00:19:48.719 --rc genhtml_function_coverage=1 00:19:48.719 --rc genhtml_legend=1 00:19:48.719 --rc geninfo_all_blocks=1 00:19:48.719 --rc geninfo_unexecuted_blocks=1 00:19:48.719 00:19:48.719 ' 00:19:48.719 21:27:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:48.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.719 --rc genhtml_branch_coverage=1 00:19:48.719 --rc genhtml_function_coverage=1 00:19:48.719 --rc genhtml_legend=1 00:19:48.719 --rc geninfo_all_blocks=1 00:19:48.719 --rc geninfo_unexecuted_blocks=1 00:19:48.719 00:19:48.719 ' 00:19:48.719 21:27:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:48.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.719 --rc genhtml_branch_coverage=1 00:19:48.719 --rc genhtml_function_coverage=1 00:19:48.719 --rc genhtml_legend=1 00:19:48.719 --rc geninfo_all_blocks=1 00:19:48.719 --rc geninfo_unexecuted_blocks=1 00:19:48.719 00:19:48.719 ' 00:19:48.719 
21:27:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:48.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.719 --rc genhtml_branch_coverage=1 00:19:48.719 --rc genhtml_function_coverage=1 00:19:48.719 --rc genhtml_legend=1 00:19:48.719 --rc geninfo_all_blocks=1 00:19:48.719 --rc geninfo_unexecuted_blocks=1 00:19:48.719 00:19:48.719 ' 00:19:48.719 21:27:12 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:48.719 21:27:12 -- nvmf/common.sh@7 -- # uname -s 00:19:48.719 21:27:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.719 21:27:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.719 21:27:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.719 21:27:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.719 21:27:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.719 21:27:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.719 21:27:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.719 21:27:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.719 21:27:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.719 21:27:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.720 21:27:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 00:19:48.720 21:27:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=4107dce9-1000-4a50-9f10-42d161d64cc8 00:19:48.720 21:27:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.720 21:27:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.720 21:27:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:48.720 21:27:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:48.720 21:27:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.720 21:27:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.720 21:27:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.720 21:27:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.720 21:27:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.720 21:27:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.720 21:27:12 -- paths/export.sh@5 -- # export PATH 00:19:48.720 21:27:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.720 21:27:12 -- nvmf/common.sh@46 -- # : 0 00:19:48.720 21:27:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:48.720 21:27:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:48.720 21:27:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:48.720 21:27:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.720 21:27:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.720 21:27:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:48.720 21:27:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:48.720 21:27:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:48.720 21:27:12 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:19:48.720 21:27:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:48.720 21:27:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.720 21:27:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:48.720 21:27:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:48.720 21:27:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:48.720 21:27:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.720 21:27:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:48.720 21:27:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.978 21:27:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:48.978 21:27:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:48.978 21:27:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:48.978 21:27:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:48.978 21:27:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:48.978 21:27:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:48.978 21:27:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.978 21:27:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.979 21:27:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:48.979 21:27:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:48.979 21:27:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:48.979 21:27:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:48.979 21:27:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:48.979 21:27:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.979 21:27:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:48.979 21:27:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:48.979 21:27:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:48.979 21:27:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:48.979 21:27:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:48.979 21:27:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:48.979 Cannot find device "nvmf_tgt_br" 00:19:48.979 21:27:12 -- nvmf/common.sh@154 -- # true 00:19:48.979 21:27:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:48.979 Cannot find device "nvmf_tgt_br2" 00:19:48.979 21:27:12 -- nvmf/common.sh@155 -- # true 
00:19:48.979 21:27:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:48.979 21:27:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:48.979 Cannot find device "nvmf_tgt_br" 00:19:48.979 21:27:12 -- nvmf/common.sh@157 -- # true 00:19:48.979 21:27:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:48.979 Cannot find device "nvmf_tgt_br2" 00:19:48.979 21:27:12 -- nvmf/common.sh@158 -- # true 00:19:48.979 21:27:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:48.979 21:27:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:48.979 21:27:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:48.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:48.979 21:27:12 -- nvmf/common.sh@161 -- # true 00:19:48.979 21:27:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:48.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:48.979 21:27:12 -- nvmf/common.sh@162 -- # true 00:19:48.979 21:27:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:48.979 21:27:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:48.979 21:27:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:48.979 21:27:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:48.979 21:27:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:48.979 21:27:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:48.979 21:27:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:48.979 21:27:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:48.979 21:27:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:48.979 21:27:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:48.979 21:27:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:48.979 21:27:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:48.979 21:27:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:48.979 21:27:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:48.979 21:27:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:48.979 21:27:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:48.979 21:27:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:49.238 21:27:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:49.238 21:27:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:49.238 21:27:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:49.238 21:27:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:49.238 21:27:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:49.238 21:27:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:49.238 21:27:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:49.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:49.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:19:49.238 00:19:49.238 --- 10.0.0.2 ping statistics --- 00:19:49.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.238 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:19:49.238 21:27:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:49.238 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:49.238 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:19:49.238 00:19:49.238 --- 10.0.0.3 ping statistics --- 00:19:49.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.238 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:49.238 21:27:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:49.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:49.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:49.238 00:19:49.238 --- 10.0.0.1 ping statistics --- 00:19:49.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.238 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:49.238 21:27:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.238 21:27:12 -- nvmf/common.sh@421 -- # return 0 00:19:49.238 21:27:12 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:49.238 21:27:12 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:49.806 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:49.806 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:19:50.065 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:19:50.065 21:27:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.065 21:27:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:50.065 21:27:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:50.065 21:27:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.065 21:27:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:50.065 21:27:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:50.065 21:27:13 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:19:50.065 21:27:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:50.065 21:27:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:50.065 21:27:13 -- common/autotest_common.sh@10 -- # set +x 00:19:50.065 21:27:13 -- nvmf/common.sh@469 -- # nvmfpid=87405 00:19:50.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.065 21:27:13 -- nvmf/common.sh@470 -- # waitforlisten 87405 00:19:50.065 21:27:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:50.065 21:27:13 -- common/autotest_common.sh@829 -- # '[' -z 87405 ']' 00:19:50.065 21:27:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.065 21:27:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.065 21:27:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.065 21:27:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.065 21:27:13 -- common/autotest_common.sh@10 -- # set +x 00:19:50.065 [2024-11-28 21:27:13.718633] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
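Everything the target needs at this point was prepared by the nvmf_veth_init trace above: a network namespace for the target process, veth pairs whose bridge-side ends are enslaved to nvmf_br, the 10.0.0.x test addresses, an iptables rule accepting the NVMe/TCP port, and ping checks in both directions. Collapsed into a standalone sketch with the same device and address names (the second target interface and the bridge forward rule are left out for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator side -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator

With that in place the target itself simply runs inside the namespace, which is why the nvmf_tgt invocation above is wrapped in ip netns exec nvmf_tgt_ns_spdk.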
00:19:50.065 [2024-11-28 21:27:13.718738] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.323 [2024-11-28 21:27:13.860144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:50.323 [2024-11-28 21:27:13.900370] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:50.323 [2024-11-28 21:27:13.900791] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.323 [2024-11-28 21:27:13.900940] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.323 [2024-11-28 21:27:13.901195] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:50.323 [2024-11-28 21:27:13.901391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.323 [2024-11-28 21:27:13.902570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.323 [2024-11-28 21:27:13.902754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:50.323 [2024-11-28 21:27:13.902763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.259 21:27:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.259 21:27:14 -- common/autotest_common.sh@862 -- # return 0 00:19:51.259 21:27:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:51.259 21:27:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:51.259 21:27:14 -- common/autotest_common.sh@10 -- # set +x 00:19:51.259 21:27:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.259 21:27:14 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:51.259 21:27:14 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:19:51.259 21:27:14 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:19:51.259 21:27:14 -- scripts/common.sh@311 -- # local bdf bdfs 00:19:51.259 21:27:14 -- scripts/common.sh@312 -- # local nvmes 00:19:51.259 21:27:14 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:19:51.259 21:27:14 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:51.259 21:27:14 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:19:51.259 21:27:14 -- scripts/common.sh@297 -- # local bdf= 00:19:51.259 21:27:14 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:19:51.259 21:27:14 -- scripts/common.sh@232 -- # local class 00:19:51.259 21:27:14 -- scripts/common.sh@233 -- # local subclass 00:19:51.259 21:27:14 -- scripts/common.sh@234 -- # local progif 00:19:51.259 21:27:14 -- scripts/common.sh@235 -- # printf %02x 1 00:19:51.259 21:27:14 -- scripts/common.sh@235 -- # class=01 00:19:51.259 21:27:14 -- scripts/common.sh@236 -- # printf %02x 8 00:19:51.259 21:27:14 -- scripts/common.sh@236 -- # subclass=08 00:19:51.259 21:27:14 -- scripts/common.sh@237 -- # printf %02x 2 00:19:51.259 21:27:14 -- scripts/common.sh@237 -- # progif=02 00:19:51.259 21:27:14 -- scripts/common.sh@239 -- # hash lspci 00:19:51.259 21:27:14 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:19:51.259 21:27:14 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:19:51.259 21:27:14 -- scripts/common.sh@242 -- # grep -i -- -p02 00:19:51.259 21:27:14 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:51.259 21:27:14 -- scripts/common.sh@244 -- # tr -d '"' 00:19:51.259 21:27:14 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:51.259 21:27:14 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:19:51.259 21:27:14 -- scripts/common.sh@15 -- # local i 00:19:51.259 21:27:14 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:19:51.259 21:27:14 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:51.259 21:27:14 -- scripts/common.sh@24 -- # return 0 00:19:51.259 21:27:14 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:19:51.259 21:27:14 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:51.259 21:27:14 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:19:51.259 21:27:14 -- scripts/common.sh@15 -- # local i 00:19:51.259 21:27:14 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:19:51.259 21:27:14 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:51.259 21:27:14 -- scripts/common.sh@24 -- # return 0 00:19:51.259 21:27:14 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:19:51.259 21:27:14 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:51.259 21:27:14 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:19:51.259 21:27:14 -- scripts/common.sh@322 -- # uname -s 00:19:51.259 21:27:14 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:51.259 21:27:14 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:51.259 21:27:14 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:51.259 21:27:14 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:19:51.259 21:27:14 -- scripts/common.sh@322 -- # uname -s 00:19:51.259 21:27:14 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:51.259 21:27:14 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:51.259 21:27:14 -- scripts/common.sh@327 -- # (( 2 )) 00:19:51.259 21:27:14 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:19:51.259 21:27:14 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:19:51.259 21:27:14 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:19:51.259 21:27:14 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:19:51.259 21:27:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:51.259 21:27:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:51.259 21:27:14 -- common/autotest_common.sh@10 -- # set +x 00:19:51.259 ************************************ 00:19:51.259 START TEST spdk_target_abort 00:19:51.259 ************************************ 00:19:51.259 21:27:14 -- common/autotest_common.sh@1114 -- # spdk_target 00:19:51.259 21:27:14 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:51.259 21:27:14 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:51.259 21:27:14 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:19:51.259 21:27:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.259 21:27:14 -- common/autotest_common.sh@10 -- # set +x 00:19:51.259 spdk_targetn1 00:19:51.260 21:27:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:51.260 21:27:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.260 21:27:14 -- common/autotest_common.sh@10 -- # set +x 00:19:51.260 [2024-11-28 
21:27:14.928425] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.260 21:27:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:19:51.260 21:27:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.260 21:27:14 -- common/autotest_common.sh@10 -- # set +x 00:19:51.260 21:27:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:19:51.260 21:27:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.260 21:27:14 -- common/autotest_common.sh@10 -- # set +x 00:19:51.260 21:27:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:19:51.260 21:27:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.260 21:27:14 -- common/autotest_common.sh@10 -- # set +x 00:19:51.260 [2024-11-28 21:27:14.956548] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.260 21:27:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:51.260 21:27:14 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:54.554 Initializing NVMe Controllers 00:19:54.554 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:54.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:54.554 Initialization complete. Launching workers. 00:19:54.554 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10185, failed: 0 00:19:54.554 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1058, failed to submit 9127 00:19:54.554 success 828, unsuccess 230, failed 0 00:19:54.554 21:27:18 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:54.554 21:27:18 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:57.854 Initializing NVMe Controllers 00:19:57.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:57.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:57.854 Initialization complete. Launching workers. 00:19:57.854 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9024, failed: 0 00:19:57.854 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1158, failed to submit 7866 00:19:57.854 success 419, unsuccess 739, failed 0 00:19:57.854 21:27:21 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:57.854 21:27:21 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:01.138 Initializing NVMe Controllers 00:20:01.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:01.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:01.138 Initialization complete. Launching workers. 
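Between those runs, the rabort trace above shows how the test drives the workload: the transport ID is assembled field by field (trtype, adrfam, traddr, trsvcid, subnqn) into a single -r argument, and the abort example binary is run once per queue depth in the qds list; the -q 4 and -q 24 runs are reported above and the -q 64 run continues below. A condensed sketch of that loop, with the path, flags, and NQN taken from the trace:

  qds=(4 24 64)
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
  for qd in "${qds[@]}"; do
      # workload flags copied from the trace: 4096-byte mixed read/write at the given abort queue depth
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done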
00:20:01.138 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31560, failed: 0 00:20:01.138 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2284, failed to submit 29276 00:20:01.138 success 477, unsuccess 1807, failed 0 00:20:01.138 21:27:24 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:20:01.138 21:27:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.138 21:27:24 -- common/autotest_common.sh@10 -- # set +x 00:20:01.138 21:27:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.138 21:27:24 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:01.138 21:27:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.138 21:27:24 -- common/autotest_common.sh@10 -- # set +x 00:20:01.397 21:27:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.397 21:27:25 -- target/abort_qd_sizes.sh@62 -- # killprocess 87405 00:20:01.397 21:27:25 -- common/autotest_common.sh@936 -- # '[' -z 87405 ']' 00:20:01.397 21:27:25 -- common/autotest_common.sh@940 -- # kill -0 87405 00:20:01.397 21:27:25 -- common/autotest_common.sh@941 -- # uname 00:20:01.397 21:27:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:01.397 21:27:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87405 00:20:01.397 killing process with pid 87405 00:20:01.397 21:27:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:01.397 21:27:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:01.397 21:27:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87405' 00:20:01.397 21:27:25 -- common/autotest_common.sh@955 -- # kill 87405 00:20:01.397 21:27:25 -- common/autotest_common.sh@960 -- # wait 87405 00:20:01.656 00:20:01.656 real 0m10.347s 00:20:01.656 user 0m42.686s 00:20:01.656 sys 0m1.990s 00:20:01.656 21:27:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:01.656 ************************************ 00:20:01.656 END TEST spdk_target_abort 00:20:01.656 ************************************ 00:20:01.656 21:27:25 -- common/autotest_common.sh@10 -- # set +x 00:20:01.656 21:27:25 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:20:01.656 21:27:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:01.656 21:27:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:01.656 21:27:25 -- common/autotest_common.sh@10 -- # set +x 00:20:01.656 ************************************ 00:20:01.656 START TEST kernel_target_abort 00:20:01.656 ************************************ 00:20:01.656 21:27:25 -- common/autotest_common.sh@1114 -- # kernel_target 00:20:01.656 21:27:25 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:20:01.656 21:27:25 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:20:01.656 21:27:25 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:20:01.656 21:27:25 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:20:01.656 21:27:25 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:20:01.656 21:27:25 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:01.656 21:27:25 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:01.656 21:27:25 -- nvmf/common.sh@627 -- # local block nvme 00:20:01.656 21:27:25 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:20:01.656 21:27:25 -- nvmf/common.sh@630 -- # modprobe nvmet 00:20:01.656 21:27:25 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:01.656 21:27:25 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:01.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:01.915 Waiting for block devices as requested 00:20:02.173 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:20:02.173 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:20:02.173 21:27:25 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:02.173 21:27:25 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:02.173 21:27:25 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:20:02.173 21:27:25 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:20:02.173 21:27:25 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:02.173 No valid GPT data, bailing 00:20:02.173 21:27:25 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:02.173 21:27:25 -- scripts/common.sh@393 -- # pt= 00:20:02.173 21:27:25 -- scripts/common.sh@394 -- # return 1 00:20:02.173 21:27:25 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:20:02.173 21:27:25 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:02.173 21:27:25 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:02.173 21:27:25 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:20:02.173 21:27:25 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:20:02.173 21:27:25 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:02.432 No valid GPT data, bailing 00:20:02.432 21:27:25 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:02.433 21:27:25 -- scripts/common.sh@393 -- # pt= 00:20:02.433 21:27:25 -- scripts/common.sh@394 -- # return 1 00:20:02.433 21:27:25 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:20:02.433 21:27:25 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:02.433 21:27:25 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:20:02.433 21:27:25 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:20:02.433 21:27:25 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:20:02.433 21:27:25 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:20:02.433 No valid GPT data, bailing 00:20:02.433 21:27:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:20:02.433 21:27:26 -- scripts/common.sh@393 -- # pt= 00:20:02.433 21:27:26 -- scripts/common.sh@394 -- # return 1 00:20:02.433 21:27:26 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:20:02.433 21:27:26 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:02.433 21:27:26 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:20:02.433 21:27:26 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:20:02.433 21:27:26 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:20:02.433 21:27:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:20:02.433 No valid GPT data, bailing 00:20:02.433 21:27:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:20:02.433 21:27:26 -- scripts/common.sh@393 -- # pt= 00:20:02.433 21:27:26 -- scripts/common.sh@394 -- # return 1 00:20:02.433 21:27:26 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:20:02.433 21:27:26 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:20:02.433 21:27:26 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:20:02.433 21:27:26 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:02.433 21:27:26 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:02.433 21:27:26 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:20:02.433 21:27:26 -- nvmf/common.sh@654 -- # echo 1 00:20:02.433 21:27:26 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:20:02.433 21:27:26 -- nvmf/common.sh@656 -- # echo 1 00:20:02.433 21:27:26 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:20:02.433 21:27:26 -- nvmf/common.sh@663 -- # echo tcp 00:20:02.433 21:27:26 -- nvmf/common.sh@664 -- # echo 4420 00:20:02.433 21:27:26 -- nvmf/common.sh@665 -- # echo ipv4 00:20:02.433 21:27:26 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:02.433 21:27:26 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4107dce9-1000-4a50-9f10-42d161d64cc8 --hostid=4107dce9-1000-4a50-9f10-42d161d64cc8 -a 10.0.0.1 -t tcp -s 4420 00:20:02.433 00:20:02.433 Discovery Log Number of Records 2, Generation counter 2 00:20:02.433 =====Discovery Log Entry 0====== 00:20:02.433 trtype: tcp 00:20:02.433 adrfam: ipv4 00:20:02.433 subtype: current discovery subsystem 00:20:02.433 treq: not specified, sq flow control disable supported 00:20:02.433 portid: 1 00:20:02.433 trsvcid: 4420 00:20:02.433 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:02.433 traddr: 10.0.0.1 00:20:02.433 eflags: none 00:20:02.433 sectype: none 00:20:02.433 =====Discovery Log Entry 1====== 00:20:02.433 trtype: tcp 00:20:02.433 adrfam: ipv4 00:20:02.433 subtype: nvme subsystem 00:20:02.433 treq: not specified, sq flow control disable supported 00:20:02.433 portid: 1 00:20:02.433 trsvcid: 4420 00:20:02.433 subnqn: kernel_target 00:20:02.433 traddr: 10.0.0.1 00:20:02.433 eflags: none 00:20:02.433 sectype: none 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
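For the kernel-target variant, the configure_kernel_target trace above needs no SPDK process at all: a local NVMe namespace with no GPT on it (/dev/nvme1n3 here) is exported through the kernel nvmet configfs tree as subsystem kernel_target on a TCP port at 10.0.0.1:4420, and nvme discover then lists it alongside the discovery subsystem. Roughly the same sequence written out directly; the configfs attribute names below are the standard nvmet ones and are an assumption, since the xtrace output records the echoed values but not their redirect targets (the identification string 'SPDK-kernel_target' written by the helper is omitted for the same reason):

  nvmet=/sys/kernel/config/nvmet
  sub=$nvmet/subsystems/kernel_target

  modprobe nvmet
  mkdir "$sub" "$sub/namespaces/1" "$nvmet/ports/1"

  echo 1            > "$sub/attr_allow_any_host"       # let any hostnqn connect
  echo /dev/nvme1n3 > "$sub/namespaces/1/device_path"  # back the namespace with the spare local disk
  echo 1            > "$sub/namespaces/1/enable"

  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"

  ln -s "$sub" "$nvmet/ports/1/subsystems/"            # expose the subsystem on the port

  nvme discover -t tcp -a 10.0.0.1 -s 4420             # should now show kernel_target

Tear-down is the same tree in reverse, as the clean_kernel_target trace further down shows: unlink the port entry, remove the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.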
00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:02.433 21:27:26 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:05.721 Initializing NVMe Controllers 00:20:05.721 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:05.721 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:05.721 Initialization complete. Launching workers. 00:20:05.721 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31267, failed: 0 00:20:05.721 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31267, failed to submit 0 00:20:05.721 success 0, unsuccess 31267, failed 0 00:20:05.721 21:27:29 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:05.721 21:27:29 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:09.007 Initializing NVMe Controllers 00:20:09.007 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:09.007 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:09.007 Initialization complete. Launching workers. 00:20:09.007 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 66095, failed: 0 00:20:09.007 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27831, failed to submit 38264 00:20:09.007 success 0, unsuccess 27831, failed 0 00:20:09.007 21:27:32 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:09.007 21:27:32 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:12.305 Initializing NVMe Controllers 00:20:12.305 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:12.305 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:12.305 Initialization complete. Launching workers. 
00:20:12.305 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 77995, failed: 0 00:20:12.305 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19470, failed to submit 58525 00:20:12.305 success 0, unsuccess 19470, failed 0 00:20:12.305 21:27:35 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:20:12.305 21:27:35 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:20:12.305 21:27:35 -- nvmf/common.sh@677 -- # echo 0 00:20:12.305 21:27:35 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:20:12.305 21:27:35 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:12.305 21:27:35 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:12.305 21:27:35 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:20:12.305 21:27:35 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:20:12.305 21:27:35 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:20:12.305 00:20:12.305 real 0m10.470s 00:20:12.305 user 0m5.565s 00:20:12.305 sys 0m2.358s 00:20:12.305 21:27:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:12.305 21:27:35 -- common/autotest_common.sh@10 -- # set +x 00:20:12.305 ************************************ 00:20:12.305 END TEST kernel_target_abort 00:20:12.305 ************************************ 00:20:12.305 21:27:35 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:20:12.305 21:27:35 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:20:12.305 21:27:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:12.305 21:27:35 -- nvmf/common.sh@116 -- # sync 00:20:12.305 21:27:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:12.305 21:27:35 -- nvmf/common.sh@119 -- # set +e 00:20:12.305 21:27:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:12.305 21:27:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:12.305 rmmod nvme_tcp 00:20:12.305 rmmod nvme_fabrics 00:20:12.305 rmmod nvme_keyring 00:20:12.305 21:27:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:12.305 21:27:35 -- nvmf/common.sh@123 -- # set -e 00:20:12.305 21:27:35 -- nvmf/common.sh@124 -- # return 0 00:20:12.305 21:27:35 -- nvmf/common.sh@477 -- # '[' -n 87405 ']' 00:20:12.305 21:27:35 -- nvmf/common.sh@478 -- # killprocess 87405 00:20:12.305 21:27:35 -- common/autotest_common.sh@936 -- # '[' -z 87405 ']' 00:20:12.305 21:27:35 -- common/autotest_common.sh@940 -- # kill -0 87405 00:20:12.305 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87405) - No such process 00:20:12.305 Process with pid 87405 is not found 00:20:12.305 21:27:35 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87405 is not found' 00:20:12.305 21:27:35 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:20:12.305 21:27:35 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:12.874 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:12.874 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:12.874 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:12.874 21:27:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:12.874 21:27:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:12.874 21:27:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.874 21:27:36 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:20:12.874 21:27:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.874 21:27:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:12.874 21:27:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.133 21:27:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:13.133 00:20:13.133 real 0m24.370s 00:20:13.133 user 0m49.718s 00:20:13.133 sys 0m5.641s 00:20:13.133 21:27:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:13.133 21:27:36 -- common/autotest_common.sh@10 -- # set +x 00:20:13.133 ************************************ 00:20:13.133 END TEST nvmf_abort_qd_sizes 00:20:13.133 ************************************ 00:20:13.133 21:27:36 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:20:13.133 21:27:36 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:20:13.133 21:27:36 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:20:13.133 21:27:36 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:13.133 21:27:36 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:13.133 21:27:36 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:20:13.133 21:27:36 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:20:13.133 21:27:36 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:13.133 21:27:36 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:20:13.133 21:27:36 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:13.133 21:27:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:13.133 21:27:36 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:20:13.133 21:27:36 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:20:13.133 21:27:36 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:20:13.133 21:27:36 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:20:13.133 21:27:36 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:20:13.133 21:27:36 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:20:13.133 21:27:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:13.133 21:27:36 -- common/autotest_common.sh@10 -- # set +x 00:20:13.133 21:27:36 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:20:13.133 21:27:36 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:20:13.133 21:27:36 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:20:13.133 21:27:36 -- common/autotest_common.sh@10 -- # set +x 00:20:15.039 INFO: APP EXITING 00:20:15.039 INFO: killing all VMs 00:20:15.039 INFO: killing vhost app 00:20:15.039 INFO: EXIT DONE 00:20:15.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:15.556 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:15.556 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:16.124 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:16.124 Cleaning 00:20:16.124 Removing: /var/run/dpdk/spdk0/config 00:20:16.124 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:16.124 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:16.124 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:16.124 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:16.124 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:16.124 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:16.124 Removing: /var/run/dpdk/spdk1/config 00:20:16.124 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:16.124 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:16.124 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:20:16.124 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:16.124 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:16.124 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:16.124 Removing: /var/run/dpdk/spdk2/config 00:20:16.124 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:16.124 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:16.124 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:16.124 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:16.124 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:16.124 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:16.124 Removing: /var/run/dpdk/spdk3/config 00:20:16.124 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:16.124 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:16.124 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:16.124 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:16.124 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:16.124 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:16.124 Removing: /var/run/dpdk/spdk4/config 00:20:16.124 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:16.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:16.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:16.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:16.383 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:16.383 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:16.383 Removing: /dev/shm/nvmf_trace.0 00:20:16.383 Removing: /dev/shm/spdk_tgt_trace.pid65582 00:20:16.383 Removing: /var/run/dpdk/spdk0 00:20:16.383 Removing: /var/run/dpdk/spdk1 00:20:16.383 Removing: /var/run/dpdk/spdk2 00:20:16.383 Removing: /var/run/dpdk/spdk3 00:20:16.383 Removing: /var/run/dpdk/spdk4 00:20:16.383 Removing: /var/run/dpdk/spdk_pid65434 00:20:16.383 Removing: /var/run/dpdk/spdk_pid65582 00:20:16.383 Removing: /var/run/dpdk/spdk_pid65835 00:20:16.383 Removing: /var/run/dpdk/spdk_pid66020 00:20:16.383 Removing: /var/run/dpdk/spdk_pid66173 00:20:16.383 Removing: /var/run/dpdk/spdk_pid66239 00:20:16.383 Removing: /var/run/dpdk/spdk_pid66322 00:20:16.383 Removing: /var/run/dpdk/spdk_pid66420 00:20:16.383 Removing: /var/run/dpdk/spdk_pid66504 00:20:16.383 Removing: /var/run/dpdk/spdk_pid66537 00:20:16.383 Removing: /var/run/dpdk/spdk_pid66567 00:20:16.383 Removing: /var/run/dpdk/spdk_pid66641 00:20:16.384 Removing: /var/run/dpdk/spdk_pid66722 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67154 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67206 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67252 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67268 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67329 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67345 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67407 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67423 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67468 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67486 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67532 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67550 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67674 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67704 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67791 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67837 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67856 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67920 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67934 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67963 00:20:16.384 Removing: /var/run/dpdk/spdk_pid67990 
00:20:16.384 Removing: /var/run/dpdk/spdk_pid68019 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68033 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68073 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68087 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68116 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68136 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68170 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68184 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68215 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68233 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68269 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68283 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68318 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68336 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68367 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68387 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68416 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68435 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68470 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68484 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68513 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68532 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68567 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68581 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68615 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68635 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68664 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68678 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68718 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68735 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68767 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68790 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68827 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68841 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68876 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68895 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68925 00:20:16.384 Removing: /var/run/dpdk/spdk_pid68997 00:20:16.384 Removing: /var/run/dpdk/spdk_pid69084 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69416 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69433 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69464 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69477 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69490 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69508 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69521 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69529 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69547 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69565 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69573 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69591 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69603 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69617 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69638 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69645 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69664 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69676 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69689 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69702 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69732 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69750 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69772 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69842 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69863 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69878 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69901 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69916 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69918 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69953 00:20:16.643 Removing: 
/var/run/dpdk/spdk_pid69970 00:20:16.643 Removing: /var/run/dpdk/spdk_pid69991 00:20:16.643 Removing: /var/run/dpdk/spdk_pid70004 00:20:16.643 Removing: /var/run/dpdk/spdk_pid70006 00:20:16.643 Removing: /var/run/dpdk/spdk_pid70008 00:20:16.643 Removing: /var/run/dpdk/spdk_pid70021 00:20:16.643 Removing: /var/run/dpdk/spdk_pid70023 00:20:16.643 Removing: /var/run/dpdk/spdk_pid70025 00:20:16.643 Removing: /var/run/dpdk/spdk_pid70038 00:20:16.643 Removing: /var/run/dpdk/spdk_pid70059 00:20:16.643 Removing: /var/run/dpdk/spdk_pid70091 00:20:16.643 Removing: /var/run/dpdk/spdk_pid70095 00:20:16.643 Removing: /var/run/dpdk/spdk_pid70124 00:20:16.643 Removing: /var/run/dpdk/spdk_pid70133 00:20:16.643 Removing: /var/run/dpdk/spdk_pid70135 00:20:16.643 Removing: /var/run/dpdk/spdk_pid70176 00:20:16.643 Removing: /var/run/dpdk/spdk_pid70187 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70214 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70221 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70223 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70235 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70238 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70246 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70253 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70255 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70336 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70373 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70484 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70516 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70560 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70575 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70593 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70613 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70637 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70657 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70733 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70736 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70779 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70851 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70897 00:20:16.644 Removing: /var/run/dpdk/spdk_pid70920 00:20:16.644 Removing: /var/run/dpdk/spdk_pid71018 00:20:16.644 Removing: /var/run/dpdk/spdk_pid71054 00:20:16.644 Removing: /var/run/dpdk/spdk_pid71090 00:20:16.644 Removing: /var/run/dpdk/spdk_pid71309 00:20:16.644 Removing: /var/run/dpdk/spdk_pid71401 00:20:16.644 Removing: /var/run/dpdk/spdk_pid71429 00:20:16.644 Removing: /var/run/dpdk/spdk_pid71744 00:20:16.644 Removing: /var/run/dpdk/spdk_pid71787 00:20:16.644 Removing: /var/run/dpdk/spdk_pid72098 00:20:16.644 Removing: /var/run/dpdk/spdk_pid72510 00:20:16.644 Removing: /var/run/dpdk/spdk_pid72781 00:20:16.644 Removing: /var/run/dpdk/spdk_pid73525 00:20:16.903 Removing: /var/run/dpdk/spdk_pid74353 00:20:16.903 Removing: /var/run/dpdk/spdk_pid74466 00:20:16.903 Removing: /var/run/dpdk/spdk_pid74528 00:20:16.903 Removing: /var/run/dpdk/spdk_pid75777 00:20:16.903 Removing: /var/run/dpdk/spdk_pid75999 00:20:16.903 Removing: /var/run/dpdk/spdk_pid76300 00:20:16.903 Removing: /var/run/dpdk/spdk_pid76409 00:20:16.903 Removing: /var/run/dpdk/spdk_pid76548 00:20:16.903 Removing: /var/run/dpdk/spdk_pid76570 00:20:16.903 Removing: /var/run/dpdk/spdk_pid76598 00:20:16.903 Removing: /var/run/dpdk/spdk_pid76625 00:20:16.903 Removing: /var/run/dpdk/spdk_pid76709 00:20:16.903 Removing: /var/run/dpdk/spdk_pid76851 00:20:16.903 Removing: /var/run/dpdk/spdk_pid76982 00:20:16.903 Removing: /var/run/dpdk/spdk_pid77064 00:20:16.903 Removing: /var/run/dpdk/spdk_pid77447 00:20:16.903 Removing: /var/run/dpdk/spdk_pid77802 
00:20:16.903 Removing: /var/run/dpdk/spdk_pid77804 00:20:16.903 Removing: /var/run/dpdk/spdk_pid80006 00:20:16.903 Removing: /var/run/dpdk/spdk_pid80014 00:20:16.903 Removing: /var/run/dpdk/spdk_pid80298 00:20:16.903 Removing: /var/run/dpdk/spdk_pid80318 00:20:16.903 Removing: /var/run/dpdk/spdk_pid80332 00:20:16.903 Removing: /var/run/dpdk/spdk_pid80357 00:20:16.903 Removing: /var/run/dpdk/spdk_pid80373 00:20:16.903 Removing: /var/run/dpdk/spdk_pid80452 00:20:16.903 Removing: /var/run/dpdk/spdk_pid80459 00:20:16.903 Removing: /var/run/dpdk/spdk_pid80572 00:20:16.903 Removing: /var/run/dpdk/spdk_pid80575 00:20:16.903 Removing: /var/run/dpdk/spdk_pid80683 00:20:16.903 Removing: /var/run/dpdk/spdk_pid80685 00:20:16.903 Removing: /var/run/dpdk/spdk_pid81078 00:20:16.903 Removing: /var/run/dpdk/spdk_pid81125 00:20:16.903 Removing: /var/run/dpdk/spdk_pid81235 00:20:16.903 Removing: /var/run/dpdk/spdk_pid81313 00:20:16.903 Removing: /var/run/dpdk/spdk_pid81626 00:20:16.903 Removing: /var/run/dpdk/spdk_pid81830 00:20:16.903 Removing: /var/run/dpdk/spdk_pid82217 00:20:16.903 Removing: /var/run/dpdk/spdk_pid82746 00:20:16.903 Removing: /var/run/dpdk/spdk_pid83200 00:20:16.903 Removing: /var/run/dpdk/spdk_pid83262 00:20:16.903 Removing: /var/run/dpdk/spdk_pid83315 00:20:16.903 Removing: /var/run/dpdk/spdk_pid83363 00:20:16.903 Removing: /var/run/dpdk/spdk_pid83478 00:20:16.903 Removing: /var/run/dpdk/spdk_pid83536 00:20:16.903 Removing: /var/run/dpdk/spdk_pid83602 00:20:16.903 Removing: /var/run/dpdk/spdk_pid83649 00:20:16.903 Removing: /var/run/dpdk/spdk_pid83970 00:20:16.903 Removing: /var/run/dpdk/spdk_pid85157 00:20:16.903 Removing: /var/run/dpdk/spdk_pid85303 00:20:16.903 Removing: /var/run/dpdk/spdk_pid85541 00:20:16.903 Removing: /var/run/dpdk/spdk_pid86110 00:20:16.903 Removing: /var/run/dpdk/spdk_pid86274 00:20:16.903 Removing: /var/run/dpdk/spdk_pid86426 00:20:16.903 Removing: /var/run/dpdk/spdk_pid86523 00:20:16.903 Removing: /var/run/dpdk/spdk_pid86690 00:20:16.903 Removing: /var/run/dpdk/spdk_pid86799 00:20:16.903 Removing: /var/run/dpdk/spdk_pid87456 00:20:16.903 Removing: /var/run/dpdk/spdk_pid87491 00:20:16.903 Removing: /var/run/dpdk/spdk_pid87526 00:20:16.903 Removing: /var/run/dpdk/spdk_pid87775 00:20:16.903 Removing: /var/run/dpdk/spdk_pid87806 00:20:16.903 Removing: /var/run/dpdk/spdk_pid87841 00:20:16.903 Clean 00:20:17.163 killing process with pid 59801 00:20:17.163 killing process with pid 59806 00:20:17.163 21:27:40 -- common/autotest_common.sh@1446 -- # return 0 00:20:17.163 21:27:40 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:20:17.163 21:27:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:17.163 21:27:40 -- common/autotest_common.sh@10 -- # set +x 00:20:17.163 21:27:40 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:20:17.163 21:27:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:17.163 21:27:40 -- common/autotest_common.sh@10 -- # set +x 00:20:17.163 21:27:40 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:17.163 21:27:40 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:17.163 21:27:40 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:17.163 21:27:40 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:20:17.163 21:27:40 -- spdk/autotest.sh@383 -- # hostname 00:20:17.163 21:27:40 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:17.422 geninfo: WARNING: invalid characters removed from testname! 00:20:39.401 21:28:02 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:42.691 21:28:06 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:45.226 21:28:08 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:47.760 21:28:11 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:50.326 21:28:13 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:52.231 21:28:15 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:54.766 21:28:18 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:54.766 21:28:18 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:20:54.766 21:28:18 -- common/autotest_common.sh@1690 -- $ lcov --version 00:20:54.766 21:28:18 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:20:54.766 21:28:18 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:20:54.766 21:28:18 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:20:54.766 21:28:18 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:20:54.766 21:28:18 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:20:54.766 21:28:18 -- scripts/common.sh@335 -- $ IFS=.-: 00:20:54.766 21:28:18 -- scripts/common.sh@335 -- $ read -ra ver1 00:20:54.766 21:28:18 -- scripts/common.sh@336 -- $ IFS=.-: 
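[Editor's note on the coverage steps traced above: spdk/autotest.sh captures the coverage accumulated during the run into cov_test.info, merges it with the pre-test cov_base.info into cov_total.info, then strips DPDK, system, and example/tool sources with a series of -r filters before deleting the intermediate files. A minimal standalone sketch of that flow follows; the REPO/OUT variables and the for-loop are illustrative shorthand rather than the exact autotest.sh code, and LCOV_OPTS is abbreviated to the two --rc options shown in the trace.]

    # Sketch only: equivalent lcov post-processing, assuming cov_base.info was captured before the tests.
    REPO=/home/vagrant/spdk_repo/spdk            # assumed to match the paths in the trace
    OUT=$REPO/../output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    # Capture the coverage produced while the tests ran.
    lcov $LCOV_OPTS -q -c --no-external -d "$REPO" -o "$OUT/cov_test.info"
    # Merge the pre-test baseline with the test capture into one report.
    lcov $LCOV_OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # Remove sources that should not count toward SPDK coverage.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done
    # Drop the intermediates, as the trace does with rm -f.
    rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"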
00:20:54.766 21:28:18 -- scripts/common.sh@336 -- $ read -ra ver2 00:20:54.766 21:28:18 -- scripts/common.sh@337 -- $ local 'op=<' 00:20:54.766 21:28:18 -- scripts/common.sh@339 -- $ ver1_l=2 00:20:54.766 21:28:18 -- scripts/common.sh@340 -- $ ver2_l=1 00:20:54.766 21:28:18 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:20:54.766 21:28:18 -- scripts/common.sh@343 -- $ case "$op" in 00:20:54.766 21:28:18 -- scripts/common.sh@344 -- $ : 1 00:20:54.766 21:28:18 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:20:54.766 21:28:18 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:54.766 21:28:18 -- scripts/common.sh@364 -- $ decimal 1 00:20:54.766 21:28:18 -- scripts/common.sh@352 -- $ local d=1 00:20:54.766 21:28:18 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:20:54.766 21:28:18 -- scripts/common.sh@354 -- $ echo 1 00:20:54.766 21:28:18 -- scripts/common.sh@364 -- $ ver1[v]=1 00:20:54.766 21:28:18 -- scripts/common.sh@365 -- $ decimal 2 00:20:54.766 21:28:18 -- scripts/common.sh@352 -- $ local d=2 00:20:54.766 21:28:18 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:20:54.766 21:28:18 -- scripts/common.sh@354 -- $ echo 2 00:20:54.766 21:28:18 -- scripts/common.sh@365 -- $ ver2[v]=2 00:20:54.766 21:28:18 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:20:54.766 21:28:18 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:20:54.766 21:28:18 -- scripts/common.sh@367 -- $ return 0 00:20:54.766 21:28:18 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:54.766 21:28:18 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:20:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.766 --rc genhtml_branch_coverage=1 00:20:54.766 --rc genhtml_function_coverage=1 00:20:54.766 --rc genhtml_legend=1 00:20:54.766 --rc geninfo_all_blocks=1 00:20:54.766 --rc geninfo_unexecuted_blocks=1 00:20:54.766 00:20:54.766 ' 00:20:54.766 21:28:18 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:20:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.766 --rc genhtml_branch_coverage=1 00:20:54.766 --rc genhtml_function_coverage=1 00:20:54.766 --rc genhtml_legend=1 00:20:54.766 --rc geninfo_all_blocks=1 00:20:54.766 --rc geninfo_unexecuted_blocks=1 00:20:54.766 00:20:54.766 ' 00:20:54.766 21:28:18 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:20:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.766 --rc genhtml_branch_coverage=1 00:20:54.766 --rc genhtml_function_coverage=1 00:20:54.766 --rc genhtml_legend=1 00:20:54.766 --rc geninfo_all_blocks=1 00:20:54.766 --rc geninfo_unexecuted_blocks=1 00:20:54.766 00:20:54.766 ' 00:20:54.766 21:28:18 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:20:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.766 --rc genhtml_branch_coverage=1 00:20:54.766 --rc genhtml_function_coverage=1 00:20:54.766 --rc genhtml_legend=1 00:20:54.766 --rc geninfo_all_blocks=1 00:20:54.766 --rc geninfo_unexecuted_blocks=1 00:20:54.766 00:20:54.766 ' 00:20:54.766 21:28:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:54.766 21:28:18 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:54.766 21:28:18 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.766 21:28:18 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.766 21:28:18 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.766 21:28:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.766 21:28:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.766 21:28:18 -- paths/export.sh@5 -- $ export PATH 00:20:54.766 21:28:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.766 21:28:18 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:54.766 21:28:18 -- common/autobuild_common.sh@440 -- $ date +%s 00:20:54.766 21:28:18 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732829298.XXXXXX 00:20:54.766 21:28:18 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732829298.GBwhee 00:20:54.766 21:28:18 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:20:54.766 21:28:18 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:20:54.766 21:28:18 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:20:54.766 21:28:18 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:20:54.766 21:28:18 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:54.766 21:28:18 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:54.766 21:28:18 -- common/autobuild_common.sh@456 -- $ get_config_params 00:20:54.766 21:28:18 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:20:54.766 21:28:18 -- common/autotest_common.sh@10 -- $ set +x 00:20:55.025 21:28:18 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:20:55.025 21:28:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:20:55.025 21:28:18 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 
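[Editor's note on the scripts/common.sh trace above (lt 1.15 2): before assembling LCOV_OPTS, the run checks whether the installed lcov is older than 2.x, since pre-2.0 releases spell the coverage switches as --rc lcov_branch_coverage / --rc lcov_function_coverage. A rough standalone sketch of that dotted-version comparison follows; it mirrors the traced logic (IFS split, element-by-element compare) but is a simplification, not the exact helper from scripts/common.sh.]

    # Sketch only: compare dotted versions the way the cmp_versions trace above does.
    # Assumes the version components are plain integers.
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:
        local op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    # Usage mirroring the trace: keep the pre-2.0 --rc names only when lcov is older than 2.
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi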
00:20:55.025 21:28:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:20:55.025 21:28:18 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:20:55.025 21:28:18 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:20:55.025 21:28:18 -- spdk/autopackage.sh@19 -- $ timing_finish 00:20:55.025 21:28:18 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:55.025 21:28:18 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:20:55.025 21:28:18 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:55.025 21:28:18 -- spdk/autopackage.sh@20 -- $ exit 0 00:20:55.025 + [[ -n 5965 ]] 00:20:55.025 + sudo kill 5965 00:20:55.035 [Pipeline] } 00:20:55.050 [Pipeline] // timeout 00:20:55.056 [Pipeline] } 00:20:55.072 [Pipeline] // stage 00:20:55.078 [Pipeline] } 00:20:55.092 [Pipeline] // catchError 00:20:55.102 [Pipeline] stage 00:20:55.104 [Pipeline] { (Stop VM) 00:20:55.117 [Pipeline] sh 00:20:55.397 + vagrant halt 00:20:58.687 ==> default: Halting domain... 00:21:05.262 [Pipeline] sh 00:21:05.541 + vagrant destroy -f 00:21:08.832 ==> default: Removing domain... 00:21:08.844 [Pipeline] sh 00:21:09.124 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:21:09.132 [Pipeline] } 00:21:09.146 [Pipeline] // stage 00:21:09.150 [Pipeline] } 00:21:09.163 [Pipeline] // dir 00:21:09.168 [Pipeline] } 00:21:09.181 [Pipeline] // wrap 00:21:09.186 [Pipeline] } 00:21:09.198 [Pipeline] // catchError 00:21:09.207 [Pipeline] stage 00:21:09.209 [Pipeline] { (Epilogue) 00:21:09.221 [Pipeline] sh 00:21:09.499 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:14.779 [Pipeline] catchError 00:21:14.781 [Pipeline] { 00:21:14.795 [Pipeline] sh 00:21:15.075 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:15.334 Artifacts sizes are good 00:21:15.342 [Pipeline] } 00:21:15.356 [Pipeline] // catchError 00:21:15.368 [Pipeline] archiveArtifacts 00:21:15.375 Archiving artifacts 00:21:15.509 [Pipeline] cleanWs 00:21:15.520 [WS-CLEANUP] Deleting project workspace... 00:21:15.521 [WS-CLEANUP] Deferred wipeout is used... 00:21:15.527 [WS-CLEANUP] done 00:21:15.529 [Pipeline] } 00:21:15.544 [Pipeline] // stage 00:21:15.550 [Pipeline] } 00:21:15.563 [Pipeline] // node 00:21:15.569 [Pipeline] End of Pipeline 00:21:15.611 Finished: SUCCESS
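[Editor's note on the timing_finish step traced above: when /usr/local/FlameGraph/flamegraph.pl is present, the accumulated timing.txt is rendered as a flame graph with the flags shown in the trace. A minimal manual equivalent, assuming timing.txt is already in the folded-stack form flamegraph.pl expects, would be:]

    # Sketch only: flamegraph.pl writes the SVG to stdout, so redirect it yourself (timing.svg is an illustrative name).
    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds \
        /home/vagrant/spdk_repo/spdk/../output/timing.txt > timing.svg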